
I don't disagree with you, but local setups typically have multi-millisecond latency as well, so basically every game already has to tolerate some latency. Carmack had an interesting point about this: http://superuser.com/questions/419070/transatlantic-ping-fas...


You know what, we could have a long and drawn-out argument about this, or we could just test it.

I've been wanting to give this one a real try: http://store.steampowered.com/app/369530/

Sadly my laptop sucks too much for it. It can run it at ~40-60 fps, but I already feel the input latency.

I'll try their AWS AMI and see what it feels like, playing over a 25 Mbit line from Hannover to an EC2 instance in Frankfurt.

Edit: Turns out there's an issue with my laptop hardware, so results from me will be delayed until a fix can be found.


Sure, but as that article alludes to, local latency is consistent (dropping a frame is considered a cardinal sin in anything competitive).

Game design takes these latencies into account; if people saw the raw network updates, they'd be shocked at how jerky and unplayable they are.

Generally, if you're building a game to be tolerant of network latency, you want a design that's predictive rather than reactive. The former tolerates latency well, since you're guessing where something will be in the future. The latter has latency inside the feedback loop, which is incredibly painful without complex mitigation measures (time-rewind lag compensation and the like).
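
To make the predictive/reactive distinction concrete, here's a rough sketch in Python (names and structure are made up, not from any particular engine) of dead reckoning a remote entity, i.e. guessing where it is now rather than waiting to be told:

    # Rough sketch of the "predictive" approach: dead-reckon a remote
    # entity from its last known state instead of waiting for the next
    # update. Class/field names here are illustrative only.
    class RemoteEntity:
        def __init__(self, pos=(0.0, 0.0), vel=(0.0, 0.0), t=0.0):
            self.last_pos = pos           # from the last snapshot
            self.last_vel = vel           # from the last snapshot
            self.last_update_time = t     # local time it arrived (seconds)

        def on_snapshot(self, pos, vel, now):
            self.last_pos, self.last_vel, self.last_update_time = pos, vel, now

        def predicted_pos(self, now):
            # Guess where the entity is *now*: prediction error grows with
            # the gap since the last update, but motion stays smooth even
            # when updates arrive late.
            dt = now - self.last_update_time
            return (self.last_pos[0] + self.last_vel[0] * dt,
                    self.last_pos[1] + self.last_vel[1] * dt)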


The games you're talking about (competitive games, games where a single dropped frame is unacceptable) are a very small, niche part of the market. Look specifically at modern consoles: every console has built-in architecture that leads to a minimum input latency of 67ms. The game can add more latency on top of that (and most do), and a lot of popular TVs add even more. Here's a database of game input latencies showing that plenty of successful games sit above 100ms: http://www.displaylag.com/video-game-input-lag-database/

So if you're talking about the "highly latency sensitive" players, then by definition those are only PC gamers, and only a small fraction of those players. The addressable market of gamers who tolerate 100+ ms of latency is very large, definitely large enough to build a business on top of.


One thing you have to take into account is that the vast majority of modern console gamers have never experienced PC gaming at 120 Hz on a CRT over a good old 15-pin VGA connector...

If you took a console gamer who was just fine playing a game at 100ms+ latency and let them play the same game with 95% of the latency magically removed, they would probably be amazed and never want to go back to their previous, horrendously laggy setup.


So the point I was making (and a little poorly, on re-reading) is that latency is fine up to a point (about 200ms) as long as it's consistent, with no jitter. The input-to-rendering pipeline falls under that category: your TV isn't dropping frames or delivering them late. You can adjust for a constant latency and "lead" it, which is why predictive game design works so well in multiplayer.

Latency-sensitive players are actually a large part of the market: any action-based multiplayer game falls into that category, including FPS games, which make up a large portion of game revenue. Roughly 10% of the market generates 90% of the revenue, so missing certain use cases excludes a large chunk of it.


The normal packet jitter on my connection is between 2 and 20 milliseconds. You could stream games to me with a fixed network delay of 120ms (plus 80ms for input/rendering) and there wouldn't be dropped or late frames.


> normal packet jitter

What's the 99th percentile for that? How about packet-loss?

None of this is new stuff in gamedev; we've been building action games successfully since the days of 28.8k modems and 200ms pings.

Maybe that makes me a bit of an entrenched player (which is a poor position to argue from on HN), but I've yet to see anything fundamental in these technologies that addresses the latency issues the way an interpolated client-side simulation can.
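
For anyone unfamiliar, "interpolated client-side simulation" roughly means rendering remote entities a little in the past and blending between buffered snapshots. A minimal sketch of the idea (the 100ms delay and the data layout are my own assumptions):

    # Minimal sketch of snapshot interpolation: render ~100ms in the
    # past and blend between the two snapshots that straddle that time.
    # Snapshots are (server_time, (x, y)) tuples with strictly
    # increasing timestamps; the 0.1s delay is illustrative.
    INTERP_DELAY = 0.1  # seconds of buffered past used to hide jitter

    def interpolated_pos(snapshots, now):
        render_time = now - INTERP_DELAY
        for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
            if t0 <= render_time <= t1:
                alpha = (render_time - t0) / (t1 - t0)
                return (p0[0] + (p1[0] - p0[0]) * alpha,
                        p0[1] + (p1[1] - p0[1]) * alpha)
        # Ran off the end of the buffer (loss or a big jitter spike):
        # hold the newest known position.
        return snapshots[-1][1]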


Checking this ping I have open, the 99th percentile jitter is less than 20ms, and packet loss is about 0.2%.

It doesn't have to be as good; it just has to reach a certain level. And for a lot of games, a bit of lead time on the controls works fine.
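
For anyone who wants to check their own line: given a list of raw RTT samples (None for a lost reply), the percentile and loss figures above fall out of a few lines of Python. The sample data here is made up:

    # Jitter here = absolute RTT change between consecutive replies;
    # loss = fraction of probes with no reply. Sample data is invented.
    def jitter_p99_and_loss(rtts_ms):
        replies = [r for r in rtts_ms if r is not None]
        loss = 1.0 - len(replies) / len(rtts_ms)
        jitters = sorted(abs(b - a) for a, b in zip(replies, replies[1:]))
        p99 = jitters[int(0.99 * (len(jitters) - 1))]
        return p99, loss

    samples = [22.1, 23.0, None, 25.4, 22.8, 40.2, 23.1]  # ms, None = lost
    print(jitter_p99_and_loss(samples))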


I'm working on a game with my own GL engine, and I found that locally even <5ms of jitter is noticeable, if only because it occasionally causes a "frame time overflow", leading to a skipped frame or a frame being displayed twice.
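
To illustrate what I mean by that (the numbers are generic, not from my engine): at 60Hz vsync every frame has a ~16.7ms slot, so a frame that runs even a few ms long means the previous image stays up for an extra interval:

    # Generic illustration: the display refreshes every ~16.7ms no
    # matter what, so a frame that misses its slot means the previous
    # image is shown for an extra interval -- a visible hitch.
    REFRESH_MS = 1000.0 / 60.0

    def intervals_per_frame(frame_times_ms):
        """How many refresh intervals each rendered frame stays on screen."""
        shown = []
        for ft in frame_times_ms:
            budget, intervals = REFRESH_MS, 1
            while ft > budget:      # frame time overflowed its slot
                intervals += 1
                budget += REFRESH_MS
            shown.append(intervals)
        return shown

    # 15ms frames fit; an 18ms spike (only 3ms of jitter) costs a whole
    # extra interval and shows up as a stutter.
    print(intervals_per_frame([15.0, 15.0, 18.0, 15.0]))  # [1, 1, 2, 1]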


No, no, that's not what I'm saying.

You set up a fixed delay that is large enough to contain your jitter. It doesn't matter if one packet takes 80ms and the next takes 105ms when neither is displayed until the 120ms mark. There will be no visible jitter.
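
In other words, a fixed playout (de-jitter) buffer. A minimal sketch, assuming sender timestamps and the local clock share a timebase, with the 120ms figure just mirroring the numbers above:

    # Every frame is scheduled for send_time + PLAYOUT_DELAY, so an
    # 80ms transit and a 105ms transit display at exactly the same
    # moment. Only frames later than the delay are ever a problem.
    import heapq
    from itertools import count

    PLAYOUT_DELAY = 0.120   # seconds, chosen to exceed worst-case jitter
    _seq = count()          # tie-breaker so the heap never compares frames
    _buffer = []            # min-heap of (display_time, seq, frame)

    def on_frame_received(send_time, frame):
        heapq.heappush(_buffer, (send_time + PLAYOUT_DELAY, next(_seq), frame))

    def next_frame_to_display(now):
        # Release a frame only once its fixed display time has arrived;
        # variation in network transit time never reaches the screen.
        if _buffer and _buffer[0][0] <= now:
            return heapq.heappop(_buffer)[2]
        return None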


These are good points, but I'd like to add that unless someone is dealing with extreme latency (i.e. >150-200ms), jitter is typically the bigger problem.

If you have a consistent latency of, say, 100ms, you can account for that. People adjust. They can plan for being a little slower to react.

On the other hand, if you're playing a shooter and you're constantly bouncing from 20ms to 100ms, you're likely going to feel pretty unhappy. You'll be forced to constantly adjust for actions happening slightly too fast or too slow depending on the direction/severity of the jitter.



