Agreed about the signal stalk. Correct me if I'm wrong, but last I saw, the signal stalk is back in Teslas, at least in the Model Y, and I believe in newer Model 3s as well.
But "gear shifter"? It doesn't really require a stalk.
When you start, you select either D or R. Not a big deal.
And when you stop, I think most (all?) EVs automatically shift to P and apply the parking brake.
I think that already covers 99% of driving for 99% of people.
Tesla is the one that started this and cost who knows how many lives. They are the ones who thought it would be great to re-invent the wheel and then when the moron execs at the other companies saw the hype they had to copy it.
The execs probably got promotions for it considering the money saved. The real morons are the customers buying these cars. Brand loyalists don't buy based on logical reasoning.
> that tends to get in the way of complex error handling.
Agree. In Java, Streams allow you to process collections in a functional style. This feature enables concise, expressive data manipulation with operations like map, filter, and reduce.
Some people point out that Java's checked exceptions spoil the simplicity and elegance of Streams by forcing you to handle exceptions.
But that's not a reason not to have checked exceptions; it's a reason not to do functional-style composition when methods can throw exceptions. Streams were designed for collections, whose operations tend not to throw. If proper error handling is important, don't use Streams.
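To make the friction concrete, here's a minimal sketch (class and method names are my own, not from any real API) of why a checked exception can't be thrown from inside `map`, plus the common unchecked-wrapper workaround:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.List;
import java.util.stream.Collectors;

public class StreamExceptions {
    // A method with a checked exception, like many real I/O APIs.
    static int parseStrict(String s) throws IOException {
        try {
            return Integer.parseInt(s.trim());
        } catch (NumberFormatException e) {
            throw new IOException("bad input: " + s, e);
        }
    }

    // This does NOT compile: java.util.function.Function.apply declares
    // no checked exceptions, so map() can't accept parseStrict directly:
    //   lines.stream().map(StreamExceptions::parseStrict)...

    // The usual workaround: wrap the checked exception in an unchecked one,
    // giving up the compiler-enforced handling checked exceptions provide.
    static int parseUnchecked(String s) {
        try {
            return parseStrict(s);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        List<Integer> nums = List.of(" 1", "2 ", "3").stream()
                .map(StreamExceptions::parseUnchecked)
                .collect(Collectors.toList());
        System.out.println(nums); // prints [1, 2, 3]
    }
}
```

The wrapper compiles, but the exception now escapes silently past the type system, which is exactly the trade-off being argued about.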
The Java streams are cool and I like them, but they're not a replacement for a functional type system or a functional language.
`map` is a lot more than a fancy for-loop for lists and arrays; it's about abstracting away the entire idea of context. Java streams aren't a substitute for what you have in Haskell.
How is this different from using a small aperture size?
When you reduce aperture size the depth of field increases. So for example when you use f/16 pretty much everything from a few feet to infinity is in focus.
Is that actually true? I do astrophotography through an f/10 telescope and its focus is very sensitive. I use a focuser that moves the camera 0.04 microns per step.
Not doubting you, just asking to understand. Astrophotography doesn't always behave the same as terrestrial photography.
in addition to aperture, perceived depth of field greatly depends on:
- focal length (wider is deeper)
- crop factor (higher is deeper)
- subject distance (farther is deeper)
compared to your telescope, any terrestrial photography is likely at the opposite extremes, and at a disadvantage everywhere but subject distance.
but, focus is most mechanically sensitive near infinity. adjustment creates an asymptotically larger change in the focal plane as infinity is approached.
in a point-and-shoot camera with a wide lens at f/16, "infinity" basically means across the street.
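This can be put into numbers with the standard hyperfocal-distance formula, H = f²/(N·c) + f, where f is focal length, N the f-number, and c the circle of confusion. The specific focal lengths and CoC values below are illustrative assumptions, not from the thread:

```java
public class Hyperfocal {
    // Hyperfocal distance H = f^2 / (N * c) + f, all lengths in mm.
    // f = focal length, N = f-number, c = circle of confusion.
    static double hyperfocalMm(double f, double n, double c) {
        return (f * f) / (n * c) + f;
    }

    public static void main(String[] args) {
        // Full-frame 28mm wide lens at f/16, conventional c = 0.03mm:
        double wide = hyperfocalMm(28, 16, 0.03);    // ~1661 mm, i.e. ~1.7 m
        // Small-sensor compact, 6mm actual focal length at f/16, c = 0.005mm:
        double compact = hyperfocalMm(6, 16, 0.005); // ~456 mm, i.e. ~0.5 m
        System.out.printf("28mm f/16: sharp from ~%.2f m to infinity%n",
                wide / 2000);    // focus at H => sharp from H/2 onward
        System.out.printf("6mm f/16:  sharp from ~%.2f m to infinity%n",
                compact / 2000);
    }
}
```

Focusing at H keeps everything from roughly H/2 to infinity acceptably sharp, which is why the compact-camera case really does make "infinity" start across the street, while a long telescope at f/10 is nowhere near this regime.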
I watched the video on the home page and thought it was weird that they spent an inordinate amount of time on frame rate. Who picks an editor based on frame rate?
If you want to talk about perf in the context of a text editor, show me how big a file you can load, especially one with no line breaks. Emacs has trouble here: if you load a minified JS file it slows to a crawl, especially with syntax highlighting on. Also show me how fast the startup time is. This is another area where Emacs does not do well.
So Zed is available on Windows, but only if you have an x64 processor. Lots of people run Windows on Arm64, and I don't see any mention of Arm64 support. This is where the puck is heading.
It's not just frame rate, but also input delay. If you're using Visual Studio Code, you might be used to waiting 100 ms for a character you typed to appear. My personal workflow is based on Kitty and Neovim, which I've configured so that it can launch within 20 ms. Working without any input delay allows me to explore and edit projects at typing speed. As such, even tiny delays really bother me and make me lose my flow. I would believe Zed's focus on performance is motivated similarly.
Also, I do not believe Windows on Arm64 is a very large demographic? Especially for developers, unless they're specifically into that platform.
The only IDE I have used where frame rate is noticeable was Visual Studio (not Code).
Once you are beyond a bare minimum, every other speed metric is more important. Zed does really well on many of those, but some depend on the LSP, so they become the bottleneck quickly.
Yeah. The Steam survey isn't a perfect sample since it's skewed towards gamers, but that currently shows just 0.08% of Windows users are on ARM, while 81.5% of Mac users are on ARM.
That may be true if you're looking at all Windows computers in existence. If you look at new laptops being sold, you see different numbers: as of 2025, Arm processors hold about 13% to 20% of the market for new Windows laptops. This matters because these are the people most likely to download and install your software.
You literally can’t tell the difference with a 20ms delay. That is an order of magnitude lower than the neural feedback loop latency. You may think that you can, but studies don’t back this up.
> At the most sensitive, our findings reveal that some perceive delays below 40 ms. However, the median threshold suggests that motorvisual delays are more likely than not to go undetected below 51-90 ms.
By this study's numbers, 20ms is somewhat below the lower limit of ~40ms, but not too far below. 100ms would be easily perceivable - though, based on the other replies, it seems that VS Code does not actually have that much latency.
Don't confuse this with human reaction time, which is indeed an order of magnitude higher, at over 200ms. For one thing, reaction time is based on unpredictable events, whereas the appearance of keystrokes is highly predictable. It's based on the user's own keypresses, which a touch typist will usually have subconsciously planned (via muscle memory) several characters in advance. So the user will also be subconsciously predicting when the text will appear, and can notice if the timing is off. Also, even when it comes to unpredictable events, humans can discern, after the fact, the time difference between two previous sensory inputs (e.g. between feeling a keyboard key press down and seeing the character on screen), for much shorter time differences than the reaction time.
Of course, just because these levels of latency are perceptible doesn't mean they're a material obstacle to getting work done. As a relatively latency-sensitive person, I'm not sure whether they're a material obstacle. I just think they're annoying. Higher levels of latency (in the hundreds of ms) can definitely get in the way though, especially when the latency is variable (like SSH over cellular connections).
You're the one who said "you literally can't tell the difference". I agree to a point. It seems plausible that they were experiencing some other effect such as hitching or slowdown, rather than just a constant 100ms delay (which again isn't supposed to happen).
On the other hand, I just thought of one way that even a small fixed amount of latency can be a material obstacle. Personally, I type fast but make lots of typos, and I don't use autocorrect. So I need to react to incorrect text appearing on screen, backspace, and retype. The slower I react, the more text I have to delete (which means not just more keypresses but also more mental overhead figuring out what I need to retype). For this purpose, I am bound by the human reaction time, but editor latency is added on top of that. The sooner the text appears, the sooner my 'reaction timer' can start, all the way down to 0 latency. [Edit: And 100ms of latency can make a meaningful difference here. I just did a quick typing speed test and measured 148 WPM which is around 12 characters per second, so 100ms is one extra character, or a bit more.]
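The bracketed arithmetic at the end checks out; as a tiny sanity check (the only inputs are the measured 148 WPM and the conventional 5 characters per "word" used by WPM tests):

```java
public class TypoCost {
    // Characters typed during a given latency window, using the
    // conventional 5 characters per "word" in WPM measurements.
    static double extraChars(double wpm, double latencySeconds) {
        double charsPerSecond = wpm * 5.0 / 60.0;
        return charsPerSecond * latencySeconds;
    }

    public static void main(String[] args) {
        // 148 WPM is ~12.3 chars/sec; 100 ms of latency is ~1.2 extra chars.
        System.out.printf("%.1f chars/sec; ~%.1f extra chars per 100 ms%n",
                148 * 5.0 / 60.0, extraChars(148, 0.100));
    }
}
```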
Also, latency might affect productivity just by being annoying and distracting. YMMV on whether this is a legitimate complaint or whether you should just get used to it. But personally I'm glad I don't have to get used to it, and can instead just use editors with low latency.
"Order of magnitude", so you're saying the neural feedback loop latency is >100ms? That seems obviously wrong.
Also you can absolutely feel the visual difference between 60Hz (~16ms) and 120Hz (~8ms), and for audio it's even more nuanced.
Just because studies don't back this up yet doesn't make it false. I imagine this is really hard to measure accurately, and focusing only on neuron activity seems misguided too. Our bodies are more than just brains.
> "Order of magnitude", so you're saying the neural feedback loop latency is >100ms? That seems obviously wrong.
Human neural feedback loop latency is a range that varies widely depending on the type of loop involved. Reflex loops are fastest, operating in tens of milliseconds, while complex loops involving conscious thought can take hundreds of milliseconds.
Short-latency reflex: 20-30ms. Signal travels through spinal cord, bypassing the brain. E.g. knee-jerk reflex.
Long-latency reflex: 50-100ms. Signal travels to the brainstem and cortex for processing before returning. E.g. Adjusting grip strength when an object begins to slip from your hand.
Simple sensorimotor reaction: 230-330ms. Simple stimulus-response pathway involving conscious processing, but minimal decision-making. E.g. pressing a button as soon as a light turns on.
Visuomotor control: ~150ms, adaptable with training. Complex, conscious loops involving vision, processing in the cortex, and motor commands. E.g. steering a bike to stay on a path in a video game.
Complex cognitive loops: the brain's processing speed for conscious thought is estimated at 10 bits per second, much slower than the rate of incoming sensory data. High-level thought, decision-making, internal mental feedback. E.g. complex tasks like analyzing a chess board or making a strategic decision.
A few years ago I did some testing with a quick Arduino-based setup I cobbled together and got some interesting results.
The first test was the simple one-light-one-button test. I found that I had reaction time somewhere in the 220-270ms range. Pretty much what you'd expect.
The second test was a sound reaction test: it makes a noise, and I press the button. I don't remember the exact times, but my reaction times for audio were comfortably under 200ms. I was surprised at how much faster I was responding to sound compared to sight.
The last test was two lights and two buttons: when the left light came on I pressed the left button; right light, right button. My reaction times were awful and I was super inaccurate, frequently pressing the wrong button. Again, I don't remember the times (I think near 400ms), but I was shocked at how much just adding a simple decision slowed me down.
High frame rates (low frame times, really) are essential to responsiveness, which, for those who appreciate it, makes much more of a difference day to day than the odd hiccup when opening a large file (not that Zed has that issue; I wouldn't know, as I haven't tried opening anything huge).
You probably do. Many people just never notice that. It's not about typing or reading fast either, it's just about how it feels. Typing into something with shitty latency feels like dragging my fingernails across a chalkboard.
It's the same with high dpi monitors. Some people (me included) are driven absolutely insane by the font rendering on low density monitors, and other people don't even notice a difference.
Honestly, consider yourself blessed. One less thing in the world to annoy you.
Yes, I can perceive that latency, if I am actively looking for it. No, it has absolutely no effect whatsoever on my ability to work. The latency is far, far below what could possibly affect neural feedback loops, even on the slowest editors. And it doesn’t bother me in the slightest.
Low-dpi font rendering also isn’t an issue for me, unless it is so bad as to be illegible (which no modern system is).
Me! Frame rate and input latency are very important for a tool I use for hours every day. Obviously that's not the only feature I look for in an editor but if an editor _doesn't_ have it, I skip it. I also try to work on devices with 120Hz displays and above these days.
This always makes me laugh. The editor was barely announced two years ago. They've built it from the ground up with native support now for three different operating systems. They're experimenting with some cool new features, and even though I don't care about it I've heard their AI integration is pretty damn good.
But waaaaah, they don't support a processor that accounts for probably less than 10% of Windows machines.
Ubiquity is pretty important when you're going to invest in learning a new editor. This is one of the advantages of vim, for example: it's available everywhere: Linux, Windows, terminal, GUI, etc.