
"TCP has an option you can set that fixes this behavior called TCP_NODELAY"

That fixes nothing. Now you are sending too many small packets using too many syscalls. Just like with UDP, buffer in user space and send in one go. If you do that, TCP_NODELAY makes no difference. (The exception is user input: if you want to send it as it happens, use TCP_NODELAY, but think about why ... it has little to do with what this article is talking about.)
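A minimal sketch of that pattern in C (the buffer size and the names queue_message and flush_tick are placeholders, not anything from the article or this comment):

    #include <string.h>
    #include <sys/socket.h>

    /* Accumulate this tick's messages in one user-space buffer,
       then flush them with a single send() per tick. */
    static unsigned char tick_buf[4096];
    static size_t tick_used;

    void queue_message(const void *msg, size_t len)
    {
        if (tick_used + len <= sizeof tick_buf) {
            memcpy(tick_buf + tick_used, msg, len);
            tick_used += len;
        }
        /* else: overflow handling omitted */
    }

    void flush_tick(int sock)
    {
        if (tick_used > 0) {
            send(sock, tick_buf, tick_used, 0); /* error handling omitted */
            tick_used = 0;
        }
    }

A real game would also frame the individual messages and handle partial sends; the point is simply one write per tick instead of one syscall per message.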

Games likely send data only around 25 times per second, and ping is likely < 50ms; at those numbers the delay from waiting on a dropped packet is roughly one RTT, barely more than one 40ms tick interval, and unnoticeable. On top of that, clients will need some kind of latency compensation and prediction anyway, independent of the TCP/UDP choice. Delays followed by bursts of 100ms or so are doable.

The problem starts when the connection stalls for more than 100ms, especially in high-bandwidth games. During the stall both behave the same. After the stall, TCP plays catch-up, wasting more time receiving outdated data and handing it to user space in order. UDP just passes on whatever is received, with a lot less catching up and maybe some dropped packets.

But gameplay has been degraded in both cases. UDP just has a better chance of masking the degradation and keeping it shorter.

Anything more than that is basically cargo-culting, like this article.



I will preface this with the fact that the author worked on at least one such FPS game (Titanfall 2) and probably knows what he is talking about.

He offered one solution to fix one behavior of TCP: if the data written to the socket is not big enough, TCP may hold it back. Setting TCP_NODELAY forces the protocol to send it without holding it. But this is only written as a note to coders who might think TCP_NODELAY will fix TCP for action games; it doesn't, because the protocol has other characteristics that are undesirable in that type of game.
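For reference, enabling the option is a single setsockopt call (a minimal sketch, assuming an already connected TCP socket descriptor):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm: segments go out as soon as data is
       written, even if they are smaller than a full segment. */
    int set_nodelay(int sock)
    {
        int one = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
    }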

Moreover, you write that waiting on a dropped packet and the delay it causes is unnoticeable. Unless you have an FPS game with TCP as its network protocol to validate this claim, I call bull. Many network programmers recommend against TCP. This is probably not simple cargo-culting.


"TCP_NODELAY forces the protocol to avoid holding data"

That is the misunderstanding. It will send one non-full packet, and only then "hold on to data" until the next packet would be a full one. That condition resets once all outstanding data has been acknowledged. If you buffer in user space and write each tick's data in one go, at 25 network updates per second you basically never trigger the condition where TCP_NODELAY makes a difference.

What modern systems use is not Nagle's algorithm itself, but a variant, Minshall-Nagle.
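Roughly, the send decision looks like this (a simplified sketch of the idea, not actual kernel code; all names here are illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    /* Simplified Minshall-Nagle send test. A small (sub-MSS) segment is
       delayed only while another small segment is still unacknowledged;
       classic Nagle delays it while ANY data is unacknowledged. */
    bool may_send_now(size_t len, size_t mss,
                      bool small_seg_unacked, bool nodelay)
    {
        if (nodelay || len >= mss)
            return true;            /* TCP_NODELAY set, or a full segment */
        return !small_seg_unacked;  /* Minshall's refinement of Nagle */
    }

In other words, one small segment may always be in flight; only a second small segment, sent while the first is still unacknowledged, gets delayed.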

There is a reason why FPS games use UDP, but that discussion should not start with TCP_NODELAY. That option is misused often enough ... when it fixes anything, it is usually because it masks a real underlying problem; had you fixed that problem, your system would react much better under stress or on bad networks.



