
Hm, neat. I notice about the same result -- 0.1 is quite obvious.

If you want to blind A/B test yourself, run this:

    DELAY='0.1'; VALUES=(0 0); VALUES[$((RANDOM % 2))]=$DELAY; alias test_a="sleep ${VALUES[0]}"; alias test_b="sleep ${VALUES[1]}"
Adjust DELAY as you like, then use test_a and test_b and see if you can guess which is which; run alias afterwards to see for sure.
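For example, a run might look like this (which alias gets the delay is random, and a bare alias will also list anything else you've defined):

    $ test_a      # run each a few times and pay attention to the lag
    $ test_b
    $ alias       # reveal which one got the sleep
    alias test_a='sleep 0'
    alias test_b='sleep 0.1'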



I was suspicious that there might be some artifacts of terminal updating that are amplifying the effect in that test, so I whipped up a little graphical test:

https://jsfiddle.net/zs8bncxj/1/

I can still get it right pretty much 100% of the time, but the difference does feel a lot more subtle than what I was seeing in iTerm.

(If you change the delay, you need to hit randomize for it to take effect.)

Edit: Also, keep in mind that we're not testing just the latency we set in either test; we're testing that delay plus the delay from I/O. So 100ms on its own might be indistinguishable, but by the time we pile everything on we're getting closer to 200ms.

The conclusion is the same as far as UI design is concerned: don't assume you can get away with 100ms; most of that leeway has already been used up by the OS and hardware.


Adding step="0.01" to the input allows stepping down in 10ms increments.

Personally I can distinguish every time at 0.06.

I find it hard to believe the rest of the hardware loop has 40ms of latency; smooth rendering would be difficult if it did.

I suspect that part of this is 'training' yourself, but part of it is also what the researchers meant. They may very well have meant that above 100ms a delay is jarring and consciously noticed, while below 100ms it isn't perceived as a delay, rather than that it's truly impossible to notice in an A/B test.

I suspect this test would be more difficult if it sometimes did A/A and B/B.
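A sketch of that, building on the alias trick above; each alias independently gets the delay or not, so sometimes both sides (or neither) are delayed:

    DELAY='0.1'
    VA=0; VB=0
    (( RANDOM % 2 )) && VA=$DELAY   # each side flips its own coin
    (( RANDOM % 2 )) && VB=$DELAY
    alias test_a="sleep $VA"
    alias test_b="sleep $VB"

Then you have to judge whether there's a difference at all, not just which side is slower.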


>I find it hard to believe the rest of the hardware loop has 40ms of latency; smooth rendering would be difficult if it did.

Why would I/O latency have any effect on the smoothness of rendering? Are you sure you aren't thinking of throughput?

Remember: TFA just told us that many keyboards on their own add more than 40ms of latency. I definitely wouldn't have guessed that, so I'm very reluctant to entertain any certainty about the rest of the system having low latency.


Neat. 100ms is blatantly obvious; I can't imagine anyone not discerning that one.

I can get down to 24ms but no lower... the weird thing is that 24ms is still completely obvious to me (clearly shorter, but obvious in comparison to no delay); with a single test I can see which one has the delay every time, but 1ms less and I can't... which makes me suspect the delay is being quantised by one of the various things between sleep and the output: display, driver, X, terminal emulator, CPU, etc.

Given that, it's possible my 24ms is actually larger than 24ms and is itself being quantised up to a longer duration (though certainly not longer than 100ms).

It would be interesting to be able to test with some dedicated hardware.


25ms is 1.5 * (1/60s)... Is your monitor running at 60Hz?
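For reference, assuming a 60Hz refresh and that the output slips to the next frame boundary:

    $ awk 'BEGIN { printf "1 frame = %.1f ms, 1.5 frames = %.1f ms\n", 1000/60, 1.5*1000/60 }'
    1 frame = 16.7 ms, 1.5 frames = 25.0 ms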


You're probably right, it's an old TN panel in a 10yr old laptop... I'm gonna have to steal someone's shiny modern IPS in a minute :P (I know they generally have slower response, but it's 10 years newer so you never know) [edit] Nope, the IPS is still super slow. How common are >60Hz computer displays these days?


Common for gamers, relatively unknown for everyone else. They typically go up to 144Hz.


There are newer ones that do 240Hz, but they are TN.


If you want to have some fun, try testing your reaction time at https://www.humanbenchmark.com/tests/reactiontime.

It's not quite a keyboard delay test but more of a general human reaction time test.

The best I can do is 160ms after 5 tries on a 2560x1440 60Hz IPS panel.


I can get it down to 1ms, so there's probably something wrong with my setup. (I just look at how the cursor moves; on the delayed one, I can discern the presence of the cursor on the next line for a split second.) Maybe it's the monitor refresh rate, or the minimum resolution sleep can handle?

I'm using Alacritty, which is supposed to be really fast and everything!


Forking sleep likely takes some time.


I would be extremely surprised if fork on a unix-like takes 1ms.
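Easy enough to sanity-check. A rough sketch, assuming GNU date with nanosecond support (%N); note it measures fork+exec of the external sleep plus the shell's own overhead, not fork alone:

    # time a no-op sleep, in microseconds
    start=$(date +%s%N); sleep 0; end=$(date +%s%N)
    echo "sleep 0 round trip: $(( (end - start) / 1000 )) us"

Running "time sleep 0" tells a similar story if your date doesn't support %N.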



Both of those appear to indicate that forking is expensive when you need to copy unusually large amounts of memory (which makes sense). In this context, I would expect reasonable/small memory use and so fast forking.




