Interface performance is one of the strangest problems to have in this age of crazy processing power, but it is extremely common.
Some of the delay is just plain silly and avoidable, like having long and synchronous opening animations in response to an action, which only serve to waste the user’s time. (Oh how I love being on a web site like AT&T and watching their JavaScript poorly zoom open a blank box from the center of the page for 2 whole seconds, when I KNOW they could just show me the damned page already.)
In other cases, the source of the slowdown is less clear. Is a physical device just not delivering its signals any sooner?
I’ve played games where you have to walk to a very precise spot, hit a button, and wait literally one whole second before ANY response is visible onscreen or in audio. (And if it turns out you didn’t really take the action you thought you did, you have to walk in circles to try a slightly different spot, and wait again). Why should that ever be the case? How can a super-fast console not immediately display something or play some sound to show that you took the action?
At the hardware level, the author of BSNES recently wrote up an excellent rant on sources of latency in modern machines: https://byuu.org/articles/latency/
But, most of what you are talking about is software latency. At 1/30th of a second each, software pipelining stages seem cheap individually but pile up very quickly: hit a button, read the button, react in AI, react in animation, react in physics, react in graphics, process in the GPU, process in the display device. These can easily add up to 5/30ths of a second with poorly planned software. In the middle of all that, the animation and audio have aesthetic requirements for smooth transitions that can insert a half-second lag into the middle of that process. Now we're up to 20/30ths.
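Sketching that arithmetic, with each stage charged a full 1/30 s frame as a worst case (the frame counts here are illustrative, not measured):

```python
# Rough button-to-photon latency budget for a poorly pipelined game.
# Each stage is charged one full frame (a hypothetical worst case).
FRAME_MS = 1000 / 30  # one frame at 30 fps, in milliseconds

stages = {
    "read the button": 1,
    "react in AI": 1,
    "react in animation": 1,
    "react in physics": 1,
    "react in graphics": 1,
    "process in the GPU": 1,
    "process in the display device": 1,
}

total_frames = sum(stages.values())
print(f"{total_frames} frames = {total_frames * FRAME_MS:.0f} ms button-to-photon")
```

Even before any aesthetic smoothing is added on top, charging every stage a whole frame puts the budget well past the 100 ms threshold where an interaction starts feeling sluggish.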
Regarding animations: I've been in convos with managers requesting character animations to be "Smoother, but more poppy!" because of the conflicting needs of aesthetics and control latency. The best compromise I've found is to design a smooth transition, but have the underlying representation pop and the visuals skip immediately to mid-way through the animation.
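A minimal sketch of that compromise, using a 1-D "pose" value (all names hypothetical): the underlying gameplay state pops to the new value instantly, while the rendered pose blends toward it starting from the midpoint of the transition, so the first visible frame already reflects the input.

```python
def visual_pose(old_pose: float, new_pose: float, t: float,
                skip: float = 0.5) -> float:
    """Blend the rendered pose from old_pose to new_pose over t in [0, 1],
    but start the blend at `skip` so the very first rendered frame has
    already jumped part-way toward the target."""
    blend = skip + (1.0 - skip) * min(t, 1.0)
    return old_pose + (new_pose - old_pose) * blend

# The logical (gameplay) position is new_pose from frame one; only the
# visuals lag, and even they start half-way there.
print(visual_pose(0.0, 10.0, 0.0))  # first frame: already at 5.0
print(visual_pose(0.0, 10.0, 1.0))  # blend completes at 10.0
```

The `skip` fraction is the knob: higher values feel more "poppy" and responsive, lower values look smoother.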
I'm not sure what you are referring to about skipping halfway through a transition, but a sinusoidal transition is almost always what you want anyway. Possibly a power-curve one. That is, it should not just slide evenly from point A to B. You want some acceleration.
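For illustration, here's a sinusoidal ease next to a plain linear slide (a generic easing sketch, not any particular engine's API):

```python
import math

def linear(t: float) -> float:
    """Even slide from A to B: constant velocity, abrupt start and stop."""
    return t

def sine_ease(t: float) -> float:
    """Sinusoidal ease-in-out: accelerates away from A, decelerates into B."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  linear={linear(t):.2f}  sine={sine_ease(t):.2f}")
```

Both hit the same endpoints, but the sinusoidal curve spends less time near the middle and more near the ends, which reads as natural acceleration.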
Acceleration is important, but the “pop” that they wanted was probably some form of the “squash and stretch” effect that we all subconsciously associate with good quality animation[1].
The conflict is that in the real world, people have momentum. They take a bit of time to change their velocities. In video games, we are accustomed to sprites that can instantly change velocity and sometimes go from motionless to moving significant distances in a single frame.
The publisher wanted both simultaneously. They wanted the human player character to instantly change direction in response to controls. But, they also wanted the character to move like a semi-realistic human who has momentum and takes a while to change directions instead of like a sprite that instantly changes direction. :/
Why is it "curious"? It's a direct consequence of attempting to emulate how non-rigid bodies (including people) move in the real world.
For a contrast of what happens when there's no squash and stretch in animation, take a look at pretty much everything ever made by Hanna-Barbera before 1990. Everything remains almost pathologically on-model all the time to reduce animation costs.
Apologies for missing this. I took the claim to mean not just animated movies, but the animations of our devices. I am specifically remembering the silly animation that Ubuntu used to have where a window would shimmer and shake as you moved it around. Or how it will "pop" onto the corner of the screen.
Windows that are flimsy are just annoying to me, which is why I would find the view that they are quality curious.
More realistic animation is typically described as more realistic. Not "popping and snappy."
I can see the it-adds-up argument but there are also plenty of techniques to deal with that. (Maybe it is an education issue for developers.)
For instance, in a lot of cases, a human cannot reasonably observe a particular type of change on every frame so you can skip frames. What I mean is, suppose you have tasks A, B, C and D to perform “each frame”: you might be able to perform tasks A and B on odd-numbered frames and C and D on even-numbered frames, with the user no wiser, as long as the result seems fine.
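A toy sketch of that odd/even split (task names hypothetical): each task still runs at half rate, but no single frame pays for all four.

```python
def tasks_for_frame(frame: int) -> list:
    """Run tasks A and B on odd-numbered frames, C and D on even ones,
    halving the per-frame workload at the cost of per-task update rate."""
    return ["A", "B"] if frame % 2 == 1 else ["C", "D"]

for frame in range(1, 5):
    print(frame, tasks_for_frame(frame))
```

This only works when the user can't perceive a task updating at half rate, which is exactly the judgment call the comment describes.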
Another technique is to prioritize the start and finish but not in-between. Often, intermediate frames are relatively crappy from a “niceness” or even correctness standpoint, and nobody really notices because the frames go by quickly. As long as the end frame looks as nice as possible and everything is in exactly the right place, you can get away with a lot of short-cuts for the steps taken to get there.
> Another technique is to prioritize the start and finish but not in-between.
The problem with techniques like these is that it's almost impossible to fully generalize them (e.g. in the case of intermediate frames, if some of them are really wrong then you get sudden clipping or jumpiness).
So if your 'fast' technique only works for a certain set of parameters, then you have just introduced an implicit dependency into your system: things are fast enough while the app looks like X, but go a bit beyond that and it suddenly breaks.
In games, everything works in frames. Usually, AI, animation and physics can be completed in a single frame. But, it is very common to pipeline graphics to a separate thread that runs a frame behind everything else. The GPU frame and display frame (multiple frames on some TVs...) are pretty much impossible to eliminate.
Ideally, a game would sample input multiple times per frame, go with wide parallelism for every step (very difficult for graphics until DX12/Vulkan came along), start some GPU work before physics is completed, render in less than 1/60th of a second and the users would enable no-processing "game mode" on their lag-optimized TVs. But, that's all not common practice.
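A toy model of why each pipelined stage adds a frame of lag (stage names illustrative): at any tick, stage i is working on the frame sampled i ticks ago, so input sampled at tick 0 only leaves the last stage several ticks later.

```python
# Tiny timeline of a 3-stage pipeline: simulation, render thread, GPU.
STAGES = ["simulate", "render", "gpu"]

def timeline(ticks: int):
    """For each wall-clock tick, show which frame number each stage is
    processing (stage i lags the front of the pipeline by i ticks)."""
    rows = []
    for tick in range(ticks):
        rows.append({s: tick - i for i, s in enumerate(STAGES)
                     if tick - i >= 0})
    return rows

for tick, row in enumerate(timeline(5)):
    print(tick, row)
```

Frame 0's input is sampled at tick 0 but only finishes the GPU stage at tick 2, before the display device adds its own frame(s) on top.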
You can solve any problem by adding more software and layers of abstraction, except the problem of too much software. That's the state we're in now.
The BBC micro could have a word processor in ROM that would boot almost instantly and responded to keypresses immediately. This was because the software was written in assembler and had to fit in a small ROM. The choice of using a TV system running (say) Android and a web browser means that, although the software is slightly easier to write and the processor is 100 times faster, it has to execute 10,000 times more machine instructions in order to render the UI.
This is partly why people like Maciej campaign against multi-megabyte text pages. Another way is possible.
Especially the lag that you see in some brand-new cars; only BMW and Audi seem to have lag-free interfaces, but anything else that involves a touch interface is just horrid! I recently sat in my friend's brand-new Honda SUV and the interface lag is just plain silly, for a car that costs $30,000+. Why is that?
How did quality control pass it? The lag on some of these car infotainment systems just to change the sound is abysmal. Someone definitely saw that and should have said something; you're paying $30,000+ for something that takes 4 seconds to respond to a music volume change. Whoever is responsible for that should not work there...
Quality control came last in the process, so when they 'finished' not long before the delivery date was due, QA got a ton of political pressure not to make the date slip.
That's why test-driven design is valuable -- you iterate while testing.
If only we'd work together then the car could have a common bus system and you'd just swap out the control console on the front and choose your level of [stupid, annoying, distracting] graphics and what have you.
There's no competition in these systems because people choose the car and get lumbered with the UI on the console. Kinda like if houses had unique electrical systems and you couldn't change the white-goods.
Cars already have CAN bus as a standard thing, so it shouldn't be difficult.
I think the CarPlay/MirrorLink/Android Auto thing is probably a better model though. Make the console dumb and let me connect my upgraded-every-year phone that's far more powerful.
Believe it or not, the software quality in these things is often quite crude, and even a mediocre JavaScript framework might perform well against it. I've seen enough things like hand-coded UI frameworks in C++ (in order to be fast) that then do things like blocking network calls on the main thread.
Some newer systems are based on Qt or Android. These typically have better performance, because the underlying frameworks have at least a decent design.
I wish there were car reviews out there that take software quality into account, in particular for things like lane-keep assist. When I was in the market for a car, the reviews that I saw only mentioned whether they had the feature, not how well it actually worked.
For what it's worth, there's a specific distinction for software systems like lane assist. The bar for "working" is so high that if it doesn't essentially work perfectly we can't say it works at all.
We would expect reviews to point out if a feature such as lane assist fails or has noteworthy failures (such as rapid weaving inside the lane) but maybe not so much if it works properly.
I currently have a Honda Civic with lane keep assist. It doesn't slow down before curves and disengages frequently. Tesla's autopilot works much better from what I have seen.
I have a 2016 Volkswagen (Tiguan II), whose Lane Assist also leaves a lot to be desired. It handles nearly straight roads with no traffic quite ok, but it would surely crash the car on nearly every obstacle (lane narrowing/widening, obstacles, tighter corners, ...) without manual intervention.
That's a reason why I don't believe we'll see safe autonomous cars in the next few years at all. But maybe Tesla is that much better - I haven't ridden in one.
I recently got a Mazda3; their newer MazdaConnect system runs on an i.MX6 (dual Cortex-A9 with GPU and video accelerators) and uses Opera as the interface. All of the core UI is written in JavaScript.
It was designed by Johnson Controls (JCI), but the IVI group was recently sold to Visteon, which probably explains the sudden lack of momentum from Mazda on new features (like, cough, CarPlay... which was announced 2 years ago and never showed up).
Most of the people hacking on the unit hang out at mazda3revolution.com. Here's a page indexing their work so far:
I can't speak for the quality of the Mercedes interface (and this is obviously marketing for non-programmers), but LOC seems like an odd thing to be emphasising.
These numbers are quite random, because most likely they also counted all the dependencies in - including the LoC for the OS kernel, all the libraries, etc. And how many LoC is Boost alone? :D
I always want to turn it “off” because it’s too distracting, and the only way is to turn the Brightness down to off. It goes like this: BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, oh now it’s off. Then 5 minutes later, the airline starts its welcome mostly-advertising video which TURNS THE DAMNED THING BACK ON AT FULL BRIGHTNESS. Then it resumes DirecTV at which point I have to say: BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, annnnnnnnd now it’s off.
This causes people to press harder in reaction, and then they end up bouncing the head of the person in front of them. It's comical if it's not happening to you.
I recently took a flight with Virgin Atlantic and was actually pleasantly surprised. It still wasn't perfect, but it was by far the best I've seen from in-flight entertainment.
I have a 2016 car whose screen UI/UX I hate... it's laggy, and looks like something from 2007. That doesn't even get into the fact that the onboard wireless is 3G, not LTE, on something from 2016.
Really makes me wish the whole thing was more hackable.
Our 2013 Honda Accord needed several updates just for the radio to work reliably. You would go to change a channel, the screen would blank out and come back on with all presets set to 97.3. Of course, if you "rebooted" the car, they came back correctly.
And the lag when hitting a touch-screen button is incredibly frustrating. I want my buttons!
Actually, a lot of the Japanese car companies are built as vertically integrated groups where a parent company (usually a bank) owns both the primary company (Honda) and a set of complementary companies that provide things like windshields or tires. It's called a keiretsu.
Then maybe they should keiretsu their way to software because whether they're a software company or not they make shitty software. And either they can make good software or they can buy good software, but if they make shitty software they're a software company, just a shitty software company.
I've worked at several companies where at least one manager/exec says "We are not a software company, we're a ___(their core product/service)___ company".
If an organization creates and utilizes software as part of its going concern... it is, at some level, a software company.
Indeed. Yahoo pretending it wasn't a software company is what led to a billion user accounts getting compromised in the largest data breach in history.
> like having long and synchronous opening animations in response to an action, which only serve to waste the user’s time.
Oooooh, don't get me started on DVD menu screen navigation. What shambling, drooling idiot decided that it was critical for me to watch an unskippable spoiler-rich montage of scenes from the entire length of the movie before I can click "Play Film", followed by another unskippable montage afterward? Insanity.
I have a Siemens washing machine, and the interface has a latency of >500 ms. How they fucked that up is way beyond me. It consists of nothing more than a rotary switch, four buttons, and three 7-segment LED displays.
I've played with the thought of disassembling the firmware just to see how they fucked this up this bad. I could never make something this unresponsive even if I tried.
It's utterly fascinating, and it pisses me off every time I do the laundry.
Why you can't just toss the clothes in, close the door and walk away is beyond me. Wish I could empty the whole 150oz of Tide into the machine and have it dispense over 96 loads too.
I've thought this many times, and I suspect it's somewhat complicated, engineering-wise — but solvable.
For one, detergent comes in at least three forms: Powder, liquid and those little plastic pouches. Powder would be pretty easy (but the dosage would be brand-specific) and liquid would be messy (flow rate would be a challenge).
The easiest way would be if all machines could accept a "standard pellet" which gets loaded in some kind of completely fool-proof way so the machine cannot mechanically choke on them, ever, or accidentally add too few/many to a load.
Same thing with dishwashers.
As for why you can't just close the door and walk away: Setting the program is an important step in washing clothes. Modern machines do have a single "start" button.
GE has both under the name "SmartDispense". They use a peristaltic pump to dispense liquid detergent. I owned the dishwasher for a few years and enjoyed the convenience.
In 20 years, I've owned 3 washing machines. I dealt with maybe 3 or 4 malfunctions over those 20 years, and each of them required only buying some spare part and installing it, or cleaning something inside.
Surely this kind of reliability is a good trade-off vs. having to pour some detergent for each wash?
Key-press response time can frequently be more than 30 seconds, depending on what the action is. Of course, you might say that is because of Blu-ray bloat on more recent discs, but I can assure you that it's been that way from the day I purchased it. Sure, some discs were better than others, but the multi-minute boot-up, disc load times, player menu popup times, etc. have been there since the beginning. I used to use it as a demo against my HD DVD player of why Blu-ray wasn't ready for prime time, and it was a 3rd-generation Blu-ray player.
Given it's a BDP-series player, if you were interested you may be able to get a Linux shell of some kind on it and find out what's taking so long to run on it.
There's a whole lot of Philips and Sony players that are based off some ancient Mediatek SDK.
Edit: Wow, that's old: Sigmatek, not Mediatek. A makefile in the GPL source suggests there's a similar Pioneer player somewhere, too.
> like having long and synchronous opening animations in response to an action
I love animations when they make the UI more understandable. I can't stand them when they are more than a couple hundred milliseconds though. I don't even think "synchronous" when I think of animations. That sounds terrible. If they are quick animations it doesn't seem as big of a deal as the long running ones though.
For me the most insulting part is the inversion of priorities in these designs. The top priority of a UI designer SHOULD be to make the user as productive as possible, yet making me wait for something that is by definition not necessary (like an animation) is nonsensical. A related backward trend is this idea of pushing something in front of my face as a modal panel, with complete disregard for the fact that I was working on something and am now (a) distracted, (b) completely unable to continue doing what I chose to do, and (c) will have problems even after the modal goes away, taking extra time to figure out how to refocus on whatever I was originally trying to do before being interrupted.
That is even true for actions I triggered, for example: I click 4 icons on my desktop consecutively and thus 4 binaries start in the background. The order in which they appear is determined by their startup time: One pops up after another. BUT: whenever one is open and I am USING it, all the ones coming up should not pop up over the current one! This bad behaviour even happens in Windows 10 and many desktop environments.
To UI designers:
Have some consideration for the f user!
As far as the less-clear cases, this is basically CAP theorem, with some physics thrown in for good measure. In some sense there is always a partition of some length between two points, thanks to the speed of light: the theoretical limit of information propagation through space.
So in the presence of this delay "partition," you have three choices, really, and the choice you make depends on the application.
A) You can choose to be available and responsive. Show the user feedback immediately and never concern yourself with global state. Technically, I'd call this an illegal choice because you must have some sort of state to even be executing code. Unless you simply don't write the code, in which case your job is easy!
B) You can choose to be immediately available and eventually consistent. You calculate the response quickly with the assumptions you have most available (local memory, disk), all while transmitting events and waiting for the further-away less-available state to become available.
This is the way many online games that need quick feedback to be fun are done. [1] Unfortunately, this is also the source of the lag jumps that you see. You're always running with [partition-size in ms] outdated global state, so the assumptions you made when calculating outcomes are going to be incorrect. This is why your headshot might turn into a total miss when the player jumps five feet to his right and, oh yeah, you also died.
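A minimal sketch of that option-B style, client-side prediction with server reconciliation (all names hypothetical; real netcode also interpolates and handles packet loss):

```python
class PredictedPlayer:
    """Apply inputs locally right away for responsiveness, then reconcile
    when the authoritative server state arrives."""

    def __init__(self):
        self.x = 0.0
        self.pending = []  # (seq, dx) inputs not yet confirmed by the server

    def apply_input(self, seq: int, dx: float):
        self.x += dx  # optimistic: respond immediately on the client
        self.pending.append((seq, dx))

    def on_server_state(self, last_seq: int, server_x: float):
        # Snap to the authoritative position, then replay any inputs the
        # server hadn't seen yet. If the server disagreed, this is where
        # the visible "lag jump" correction happens.
        self.x = server_x
        self.pending = [(s, dx) for s, dx in self.pending if s > last_seq]
        for _, dx in self.pending:
            self.x += dx

p = PredictedPlayer()
p.apply_input(1, 1.0)
p.apply_input(2, 1.0)
p.on_server_state(last_seq=1, server_x=1.0)  # server only saw input 1
print(p.x)  # 2.0: authoritative position plus the replayed input
```

When the server's position matches the prediction, the correction is invisible; when it doesn't, you get exactly the teleporting opponents and retroactive deaths described above.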
C) Don't react to events until the global state has been updated.
This means a full round-trip plus processing remotely and locally before that click event performs the action it is supposed to. This can be anything from a crappy experience (I shot into the ground, why should I wait), all the way to the only sensible choice (if integrity is highly important, say in transactions and avoiding double-spend).
Really, it's so much more than this too. On top of availability vs consistency you have to account for some trust model (the game client says it was a headshot, but how do I know I can trust the client) and information security (confidentiality, availability, integrity).
So TL;DR there are lots of very hard problems in distributed systems and sometimes people just default to one stance or the other to balance their cognitive load or for any number of reasons (ranging from legit to ridiculous). Sometimes they default to consistency. That's probably the case for your button-click example.
I've never connected this to the CAP theorem, it's a good way of looking at it.
I see it as two different questions: "Did the computer hear me?" and "Does the computer have a response for me?" Most people only make an effort to answer the latter for the user, since it indirectly answers the first question anyway. But you can easily answer the first by quickly doing some sort of update, such as a progress indicator, animating a button staying depressed, etc. You don't have to mess with your real model until you get a response (or an error), but then you also don't leave the user confused for 12 seconds while your app loads search results or whatever.
Obviously it doesn't work in every situation. In most video games you want your actions to affect the gameworld immediately, even if the server doesn't know about it yet. However, for most applications just adding fast indicators that the client is aware of your actions (and staying off the UI thread) will make it feel more responsive.
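A sketch of that split, using asyncio as a stand-in for a real UI toolkit's event loop (names hypothetical): the acknowledgment is written immediately, and the real model is only updated when the slow work completes.

```python
import asyncio

async def on_click(ui: dict):
    # Answer "did the computer hear me?" instantly, before the slow work.
    ui["button"] = "pressed"
    ui["status"] = "loading..."
    # Simulate the slow work (network call, search) off the critical path;
    # asyncio.sleep's `result` argument is just a stand-in for its output.
    result = await asyncio.sleep(0.01, result="42 results")
    # Answer "does the computer have a response for me?" when it's ready.
    ui["status"] = result

ui = {}
asyncio.run(on_click(ui))
print(ui["status"])
```

The key point is that the first two writes happen before any awaiting, so the user sees feedback within a frame even when the answer takes seconds.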
Consumer Reports has found that the functionality of the in-car entertainment system is the #1 indicator of customer satisfaction. You might not return the car, but you can bad mouth the system to anyone who will listen and purchase a different brand car next time.
Someone may choose to not buy a car after test driving it and experiencing the laggy touchscreen...or after reading reviews by people who did already buy the car.
Lack of performance is an attribute that would contribute to a potential consumer's attitude towards the whole car. It may not have as huge of an impact as it would for Amazon, but saying it has no effect is definitely wrong.
Though they might not necessarily be able to use it in marketing (most customers probably think the UI should always be fast) and can't charge a premium for a "fast" UI, they might lose potential customers due to a bad reputation.