
It's basically just Windows: back when the current Windows architecture was designed (OS/2 and Windows NT going forward -- not Win9x), the primary purpose of any given PC was to run one application at a time. Sure, you could switch applications, and that was well accounted for, but the whole concept was that one application would always be in focus, and pretty much everything from a process/memory/file system standpoint is built around that assumption.

Even for servers, the concept was (and still is) just one (Windows) server per function. Running MSSQL on a Domain Controller was considered bad form/a sign you were doing something wrong.

The "big change" with the switch to the NT kernel in Windows 2000 was "proper" multi-user permissions/access controls but again, the assumption was that only one user would be using the PC at a time. Even if it was a server! Windows Terminal Server was special in a number of ways that I won't get into here but know that a lot of problems folks had with that product (and one of many reasons why it was never widely adopted) were due to the fact that it was basically just a hack on top of an architecture that wasn't made for that sort of thing.

Also, back then PC applications didn't have too many files and they tended to be much bigger than their Unix counterparts. Based on this assumption, Microsoft built hooks into the kernel that allow 3rd party applications to scan every file on use/close. This was itself a hack of sorts to work around the problem of viruses, which really only exist because Windows makes all files executable by default. Unfortunately, by the time Microsoft realized the mistake it was too late: changing it would break (fundamental) backwards compatibility.

All this and more is the primary reason why file system and forking/new-process performance is so bad on Windows. Everything that supposedly mitigates these problems (keeping one process open and using threads instead of forking, using OS copy utilities instead of copying files in your own code, etc.) is really just a hack to work around what is fundamentally a legacy/out-of-date OS architecture.
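
To be concrete, the "use the OS copy utility" workaround boils down to something like this (just a sketch; the file names are made up, but CopyFileW is the actual Win32 call):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hand the whole copy to the OS in one call instead of doing an
           open/read/write/close loop in your own code.
           "source.dat"/"dest.dat" are placeholder names. */
        if (!CopyFileW(L"source.dat", L"dest.dat", FALSE)) {
            fprintf(stderr, "CopyFileW failed: %lu\n", GetLastError());
            return 1;
        }
        return 0;
    }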

Don't get me wrong: Microsoft has kept the OS basically the same for nearly 30 years because it's super convenient for end users. It probably was a good business decision, but I think we can all agree at this point that it has long since fallen behind the times when it comes to technical capabilities. Everything we do to make our apps work better on Windows these days is basically just workarounds and hacks, and there doesn't appear to be anything coming down the pipe to change this.

My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.



> Also, back then PC applications didn't have too many files and they tended to be much bigger than their Unix counterparts.

Okay, let me interrupt you right here. To this very day, Linux has a default limit of 1024 file descriptors per process. And select(3), in fact, can't be persuaded to use FDs larger than 1023 without recompiling libc.
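
For reference, those are two different limits. A minimal sketch (assuming glibc on Linux) that just prints both:

    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* FD_SETSIZE is baked into libc's fd_set bitmask; descriptors
           numbered >= FD_SETSIZE simply don't fit into select(). */
        printf("FD_SETSIZE    = %d\n", FD_SETSIZE);

        /* The 1024-per-process default, by contrast, is just the soft
           resource limit and can be raised at runtime. */
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("RLIMIT_NOFILE = soft %llu, hard %llu\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);
        return 0;
    }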

Now let's look at Windows XP Home Edition -- you can write a loop like

    for (int i = 0; i < 1000000; i++) {
        char tmp[100];
        sprintf(tmp, "%d", i);
        CreateFile(tmp, GENERIC_ALL, FILE_SHARE_READ, NULL,
                   OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    }

and it will dutifully open a million file handles in a single process (although it'll take quite some time) with no complaints at all. Also, on Windows, select() takes an arbitrary number of socket handles.
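
As far as I know, the reason Winsock's select() isn't stuck at an FD cap like 1024 is that its fd_set is a counted array of SOCKET handles rather than a bitmask, and you can resize it per translation unit by defining FD_SETSIZE (default 64) before including the header. Roughly:

    /* Arbitrary example value; must come before the Winsock header. */
    #define FD_SETSIZE 4096
    #include <winsock2.h>

    #pragma comment(lib, "ws2_32.lib")  /* MSVC-style link directive */

    int main(void)
    {
        WSADATA wsa;
        fd_set readfds;

        WSAStartup(MAKEWORD(2, 2), &wsa);
        FD_ZERO(&readfds);
        /* FD_SET() on up to 4096 sockets would fit here; the first
           argument to select() is ignored on Windows anyway. */
        WSACleanup();
        return 0;
    }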

I dunno, but it looks to me like Windows was actually designed to handle applications that would work with lots of files simultaneously.

> fundamentally a legacy/out-of-date OS architecture

You probably wanted to write "badly designed OS architecture", because Linux (if you count it as a continuation of UNIX) is actually an older OS architecture than Windows.


1024 is a soft limit you can change through ulimit.

The system-wide limit can be seen via 'sysctl fs.file-max'. On my stock install it's 13160005.
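
If you'd rather do it from inside the program, the rough equivalent of 'ulimit -n' is setrlimit (still capped by the hard limit and, system-wide, by fs.file-max):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Raise the soft fd limit up to whatever the hard limit allows. */
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("fd soft limit is now %llu\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }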


> I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.

I think one way they could pull it off is to do to Windows what WSL2 did for Linux - run the NT kernel as a VM on top of the new OS.

As for the price, I think they're already heading there. They already officially consider Windows to be a service - I'm guessing they're just not done getting everyone properly addicted to the cloud. If they turn Windows into a SaaS execution platform, they might as well start giving it away for free.


>My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.

https://en.wikipedia.org/wiki/Midori_%28operating_system%29


>My guess is that Microsoft has a secret new OS (written from scratch) that's super modern and efficient and they're just waiting for the market opportunity to finally ditch Windows and bring out that new thing. I doubt it'll ever happen though because for "new" stuff (where you have to write all your stuff from scratch all over again) everyone expects the OS to be free.

More and more stuff gets offloaded onto WSL when it doesn't need interactive graphics or interoperability through the traditional Windows IPC mechanisms.



