This is one of the things I still miss about BeOS. On late 90s hardware, the OS wasn't quite hard real-time like QNX, but the worst-case latency was much better than Windows 10 or MacOS on the latest hardware, even in the presence of heavy resource contention — I remember simultaneously surfing the web, downloading video from a camcorder with a Firewire interface which did not buffer, and compiling Mozilla. The UI latency didn't change at all, and the DV transfer didn't drop a packet — everything else got throttled back as necessary, of course, but that meant that, say, GCC took longer to run rather than impacting what I was doing in the foreground.



> I remember simultaneously surfing the web, downloading video from a camcorder with a Firewire interface which did not buffer, and compiling Mozilla

Did you forget to run the OpenGL teapot?

BeOS was magical. I wish I could use Haiku on my desktop today.


It wasn't magic; it just starved everything else whenever you waved the mouse around. In this respect it was no different from Mac System 7, even though the design wasn't the same.


That's not true: classic Mac OS used a cooperative multitasking model, where processes had to be well-behaved enough to yield control back to the system and other processes to allow them to do work. A poorly programmed or greedy process could easily hog the CPU.


System 7 checked for mouse motion every fraction of a second and if the mouse was moving it would drop everything and draw. It didn't need apps to yield for this to happen.


Are you seriously comparing System 7, where pulling down a menu could bring a whole network down, with Be? What a joke...


It tied up the CPU and the whole net just by holding a menu item down.


That was arguably an application-specific issue, though the OS was painful to code for. I actually wrote some classic Mac OS networking code and it was insanely easy to mess things up, but eventually it ended up being rock solid. The trick was that interrupt-level code was supposed to do almost nothing. You couldn't allocate memory, for example, which was a major hint of just how minimal those functions needed to be.

Worse, with cooperative multitasking any running application could mess things up.


Then do not talk about classic Mac OS as something magical, because compared to Be it was utterly ridiculous.

For its era, System 7 was good, but later it was worse than even Windows 98.

Be tried to do things right, it was ~1995 after all.


I see where Java GUI coders took their inspiration from /s


And that's exactly how a desktop operating system should behave. The most important priority of a user-facing OS should be the user, after all.


I remember that about Amiga. Everything was just smooth. I still have fond memories of Cygnus editor being super-responsive to anything I could throw at it, even with SAS C running in the background.

I was taken aback by how slow Windows 95 felt after I finally jumped ship and retired my beloved Amiga in favor of a PC, circa 1995. Just moving the cursor was incredibly jerky in comparison, and things would randomly freeze for seconds for no apparent reason. Windows NT 4 was much smoother, though.

Even today, in Visual Studio, I regularly have multi-second pauses between a key press and the character showing up on the screen, though this probably has more to do with ReSharper than either Visual Studio or Windows.


Is it possible to do this with Linux? Giving the UI very high priority etc.


Yes, if you switch your kernel configuration to preempt. Have a look at https://liquorix.net/

On Arch Linux this is available as linux-zen

There is also realtime-linux - https://wiki.archlinux.org/index.php/Realtime_kernel_patchse...
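
For the "give the UI very high priority" part specifically, you can also do it per process with the standard POSIX scheduling API - no special kernel needed, though the PREEMPT/realtime kernels above are what actually shrink worst-case latency. A minimal sketch, assuming you run it as root (or with CAP_SYS_NICE) and pass the PID of your compositor or X server:

    // Toy example: move one process (e.g. your compositor) to SCHED_FIFO.
    #include <sys/types.h>
    #include <sched.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char** argv) {
        pid_t pid = (argc > 1) ? atoi(argv[1]) : 0;  // 0 = the calling process

        sched_param p{};
        p.sched_priority = 10;  // 1..99 for SCHED_FIFO; keep it modest

        // SCHED_FIFO tasks preempt all normal SCHED_OTHER tasks, so the UI
        // process stops competing with compile jobs for CPU time.
        if (sched_setscheduler(pid, SCHED_FIFO, &p) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("pid %d is now SCHED_FIFO\n", (int)pid);
        return 0;
    }

The chrt tool from util-linux does the same thing from the command line. The catch - and part of why none of this is the default - is that a runaway SCHED_FIFO task can starve a whole core.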


what are the tradeoffs here? I assume there is a reason why it isn't default or even in the main repo?


> Have a look at https://liquorix.net/

This looks super fun to play with, thanks!


My 5-year-old uses our first iPad, which is over 10 years old now. The keyboard UI from then is still faster than my iPad Pro keyboard UI, because Steve Jobs valued the priority of that in the stack :)


I think Steve Jobs had the same latency-sensitive condition as some of us do: getting irritated when things aren't the speed they are supposed to be.


I have an old Panasonic “smart” TV with a daft animated but glacially slow UI.

You can tell the execs who approved it were probably impressed watching someone else demo it, but never used it themselves.

Jobs would have spotted it was rubbish immediately.


OK so this is a partial tangent - but I have a washing machine that annoys the hell out of me for this exact same reason (I know washing machine UI is not a hot topic usually but it damn well should be!)

It has a dial on the front to choose settings. Potentially a great idea for skipping quickly to the one I want - BUT the damn thing can only detect one step of change about every half second. So if you fast-turn it 4 clicks, it still only moves one step. So you have to stand there, slowly turning it, click, click, click, click....

The dial is the biggest design feature on there. Massive. Right in the middle. Bright lights all around it. But they couldn't even be bothered to make it solve the one problem it was there to solve.


I'm getting angry even reading about it.

It's the kind of thing that would make you reject the purchase, if only you'd thought to test that specific bit before buying it.


I knew Hacker News was the right place to share this story. Only here would people understand the pain I am in!


I am almost a day late... But yes.

Honestly, I wish that they would make appliances with physical controls as opposed to digital. At least those are easier to fix/mod on your own.


I always think that about modern SLR cameras. They took a series of brilliant, instantly accessible physical dials (around the lens for aperture, on the top for exposure) and replaced them with menus and buttons on a tiny screen. WHY? How is that progress?

I think if someone did a Kickstarter for a very simple, physical-buttoned digital camera, it would do very well.


Or if there was even a demo unit in the first place...


Open it up, take the PCB out, figure out how the dial sensor works (the dial is probably just repeatedly nudging a contact out of the way), and thennn... program an Arduino to be a giant input buffer (that slowly replays), and wire the Arduino between the dial and the rest of the machine. :D

Caveat: said Arduino may need to sit between the rest of the buttons as well, in case inputs must be given in sequence.

Possible benefit: switch the Arduino for something from the ESP family, and you could feed instructions to the machine over Wi-Fi or Bluetooth. (Which could take the form of a dedicated wireless button panel above the machine with macros for frequently-used options.)

(Hmm. I can tell there's some over-optimization in here, but I'm not sure _where_.)
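
For the curious, the buffering part really is only a few lines of firmware. A purely hypothetical sketch - the pin numbers, the assumption that the dial just pulses a contact, and the half-second replay interval are guesses, not specs of any real machine:

    // Hypothetical: buffer dial clicks at full speed, then replay them to the
    // machine's controller at the slow rate it can actually register.
    const int DIAL_IN_PIN = 2;     // reads the dial's contact
    const int REPLAY_OUT_PIN = 3;  // drives the machine's original input line

    volatile int pendingClicks = 0;

    void onDialClick() {  // interrupt: count clicks as fast as the user turns
      pendingClicks++;
    }

    void setup() {
      pinMode(DIAL_IN_PIN, INPUT_PULLUP);
      pinMode(REPLAY_OUT_PIN, OUTPUT);
      attachInterrupt(digitalPinToInterrupt(DIAL_IN_PIN), onDialClick, FALLING);
    }

    void loop() {
      if (pendingClicks > 0) {
        noInterrupts();
        pendingClicks--;
        interrupts();
        digitalWrite(REPLAY_OUT_PIN, HIGH);  // fake one "click"
        delay(50);
        digitalWrite(REPLAY_OUT_PIN, LOW);
        delay(500);  // the controller only registers ~one step per half second
      }
    }

The ESP/Wi-Fi version would just feed the same pendingClicks counter from a network handler instead of a pin interrupt.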


I’d be pretty surprised, because I feel like the UI in OS X has always had more latency than Windows. It's especially noticeable with mouse acceleration, but also throughout the rest of the UI. This was my biggest gripe in switching to OS X.


Yeah. The mouse was always way better in Windows and Linux. Finder browsing files has also always been laggy. Now with the latest macOS versions it's ridiculous: if you take a screenshot, it takes about a minute for the file to appear.


Screenshots appear instantly in macOS; I'm not sure what machine you're on.

Finder browsing files is much faster in macOS; Windows scans the entire directory with Defender every time you click on it before it allows you to sort it or interact with it. That can take several minutes on some of my directories.

The mouse tracking model in Windows is somewhat different, and a bit more responsive, but there are multiple third-party extensions to macOS that make its model exactly the same as the Windows model.


What the heck are you talking about lol


They also ran surprisingly long, particularly the die-shrunk iPad 2: https://i.imgur.com/ELyFwFu.png


Indeed, I remember being particularly impressed by iPad battery life in the early days. I would recharge maybe once a week with light usage. Now I’ve got to plug in nearly every single night.


I remember that too.

But I also remember not having background processes. It's a trade-off.


I charge my nearly 5-year-old Pixel C tablet about once a week. Light usage, mostly some streaming up to an hour per day. Not sure if your light usage means something else though.


It’s been a while since I tested it (pre-Wayland). The main problems I saw back then were that the window managers tended to do lots of file I/O for things like moving windows or opening menus, so it was critical to tune the I/O scheduler (and NFS home directories were game over), and, since rendering was up to the app, it was common for, say, the chrome to respond relatively well while everything inside the window stayed chunky because the work ran on the main thread.

I’d hope this is much better now given things like Wayland and the increased use of parallelism.


Linux used to do that out of the box. But there was a series of "quasi-real-time has too high a maintenance cost", "quasi-real-time isn't even real enough for the real gains", and "our computers are fast now, we don't have to trade so much throughput for latency" decisions that were all very sensible but left us here.



That's not up to the kernel. Most of the sluggishness comes from the UI itself doing I/O (the UI there is woolly stuff that almost nobody understands completely anyway).


> and the DV transfer didn't drop a packet

Considering how FireWire worked in comparison to USB that's not particularly surprising. The host machine was more or less out of the way while the FW interfaces did their thing instead of requiring constant hand-holding.


Kind of: it used DMA, but the buffer size wasn’t infinite. At the time, Windows 9x and Mac OS would lose data if you ran anything demanding. Windows NT and Linux might, depending on drivers and the exact nature of the other process’ activity.


> but the worst-case latency was much better than Windows 10 or MacOS on the latest hardware

Citation needed. Do you have any numbers on that?

I'm skeptical that BeOS had better latency than modern OSes on modern hardware.



