
> at the transport level, there’s no way to decouple data rate from latency because it’s not allowed to drop anything.

Yes, fundamentally, TCP is not designed for applications that need controlled latency. I only used TCP as an example because it's supposed to provide a reasonable congestion and flow control mechanism at the protocol level, and its behavior under congestion is well specified. Unlike UDP, which leaves all of that to the application, as you said,

> It’s generally more complicated to use because all of the flow control needs to happen at the application layer.

I'm actually aware of some UDP programs that deliberately send multiple packets in a row without any regard for flow control, gaining extra throughput at the expense of other programs.

On the other hand, the TCP specification includes slow start, sliding windows, congestion control algorithms, and more recently, Explicit Congestion Notification (ECN). A main design goal is detecting the available bandwidth in an end-to-end manner, so that the sender does not transmit at an unnecessarily high data rate that the network or the receiver cannot process. Packet drops are usually required in TCP as the feedback signal that enables detection of mismatched bandwidth. It's not always optimal (a well-known example is wireless networks, where a dropped packet often does not indicate mismatched bandwidth), and the feedback signal doesn't have to be a dropped packet (TCP Vegas and TCP BBR use latency, ECN uses a status bit), but in practice the vast majority of implementations (BIC, CUBIC, Veno, CTCP) do require drops, and they represent 80%+ of installations.
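
(On Linux, for example, you can check which algorithm new connections use and switch between loss-based and latency-based ones at runtime. The commands below are only an illustration and assume a reasonably recent kernel with the tcp_bbr module available.)

  # which algorithm new TCP sockets will use (typically "cubic")
  sysctl net.ipv4.tcp_congestion_control

  # algorithms currently loaded into the kernel
  cat /proc/sys/net/ipv4/tcp_available_congestion_control

  # switch to a latency-based algorithm
  sysctl -w net.ipv4.tcp_congestion_control=bbr

  # also request ECN on outgoing connections
  # (the usual default only honors ECN when the peer asks for it)
  sysctl -w net.ipv4.tcp_ecn=1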

> UDP is the latency-prioritized transport protocol for the internet. It drops packets that it can’t handle because they are expected to contain out-of-date information anyway.

It's supposed to work that way. Unfortunately, two issues exist. The first is applying the congestion control techniques developed in response to Bufferbloat to UDP, which has to be done on a per-application basis. The second is that Bufferbloat is not simply a breakdown of flow control, but the existence of large, uncontrolled FIFO buffers in networking programs, drivers and devices with no latency-aware queue management, which leads to an inherent latency on a saturated link. It's equally detrimental to TCP and UDP, since it takes an unusually long time for any packet to leave the FIFO queue. Breaking protocol-level flow control is only an additional effect that keeps adding fuel to the fire.
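
(A quick way to see that inherent latency for yourself, assuming you have iperf3 and a server to test against; 192.0.2.1 below is only a placeholder address:)

  # baseline RTT while the link is idle
  ping -c 10 192.0.2.1

  # saturate the uplink, then watch the same ping climb
  # as packets sit in the bloated buffer
  iperf3 -c 192.0.2.1 -t 30 &
  ping -c 30 192.0.2.1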

One of the earliest examples was the large TX buffer in DOCSIS modems, discovered, documented and reported by Jim Gettys, and fixed in a later DOCSIS specification. Another was the uncontrolled TX queue of 2000 frames in many Ethernet drivers that plagued early Linux kernels. At that time, fixing the problem on a LAN was as easy as "ifconfig eth0 txqueuelen 1", effectively disabling the queue. Later, proper queue management was introduced and it's no longer a problem.
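
(The rough modern-day equivalents, assuming a current Linux kernel and an interface named eth0:)

  # the old big-hammer fix, in ip(8) syntax
  ip link set dev eth0 txqueuelen 1

  # the proper fix: a latency-aware AQM qdisc on the interface
  tc qdisc replace dev eth0 root fq_codel

  # or make it the default for all new interfaces
  sysctl -w net.core.default_qdisc=fq_codel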

But many devices are still broken. The only way to stop it is to ensure the uncontrolled buffer never has a chance to fill, either by upgrading the upstream bandwidth to more than you could ever use, or by shaping or policing your own TX bandwidth. Then you can only cross your fingers and hope there isn't another uncontrolled buffer in another piece of network equipment downstream (this usually works at home, where there's only a router and a PC running the latest Linux/BSD systems). And even then, TX buffers in the opposite direction are not under your control, and the ISP's edge equipment is a typical place for them to exist.
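
(For the shaping option, a minimal sketch on Linux, assuming a kernel with sch_cake and an uplink of roughly 20 Mbit/s; both are assumptions, and you'd set the rate slightly below what the ISP actually delivers so the modem's buffer never fills:)

  # queue packets in cake's latency-aware queue
  # instead of the modem's dumb FIFO
  tc qdisc replace dev eth0 root cake bandwidth 18Mbit

This is essentially what the SQM packages on OpenWrt routers configure for you.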


