That's an assumption in the exact opposite direction. GP is assuming that if someone commits while employed by a company then that company paid completely for that commit, while you're assuming that in that case the company "probably" didn't pay for the whole commit.
Either way, the article's conclusion seems to be insufficiently supported.
I actually specifically didn’t state it one way or the other; I was just highlighting the potential for undercounting. I think there’s some potential for overcounting if you assume the employer paid for all commits, but less so if you constrain it to those employed to work on the Linux kernel - I don’t know that many people would be working in their spare time on the same thing they’re getting paid for.
A big part of it is that NT has to check with the security manager service every time it does a file operation.
The original WSL, for instance, was a very NT answer to the problem of Linux compatibility: NT already had a personality that looked like Windows 95, so just make one that looks like Linux. It worked great except for slow file operations, which I think was seen as a crisis in Redmond: many software developers couldn’t or wouldn’t use WSL because the slow file operations crippled many build systems. So we got the rather ugly WSL2, which uses a real Linux filesystem so files perform like they do on Linux.
I don't know about ugly. Virtualization seems like a more elegant solution to the problem, as I see it. Though it also makes WSL pointless; I don't get why people use it instead of just using Hyper-V.
Honestly, just because it's easier if you've never done any kind of container or virtual OS stuff before. It comes out of the box with Windows, it's like a three-click install, and it usually "just works". Most people just want to run Linux things and don't care too much about the rest of the process.
That has not been my experience at all; I get pretty much the same times on the same machine on Linux and Windows. Something weird is happening on that person's machine. Someone mentioned Defender, and that could certainly be it, as I have it totally disabled (forcibly, by deleting the executable).
>Why should we be expected to review random papers on arxiv too...?
The GP is not saying to review each paper you read or cite. They're complaining that a colleague accepted a claim after just reading the title and where the paper was published. Between that and doing a full review there's surely a world of options.
> Daube is a slang word for something of low quality.
Which is fun because it's also a really delicious dish from Provence (south of France) made with beef that has been marinated for multiple hours in red wine.
Thanks for making me feel old. I remember reading slashdot a lot and also freshmeat.net to find new interesting software. I don't think I like the modern software experience, by comparison. It's all shoddy rehashing of the client/server model, where the client is crap and slow, and so is the server.
IIRC with Windows 98 you could just use any product key you had on as many machines as you wanted, since there was no activation or real phoning-home capability. So most likely your whole friend group would be using the same serial that was copied off your uncle's old Gateway.
It was "Outhouse Express" and "GruntPage" for me in the late 90s. I still use these for software I find particularly irksome, for example Conscrewence from AtlASSian.
But then you're putting data that used to be in RAM onto storage, in order to keep copies of stored data in RAM. Without any advance knowledge of access patterns, it doesn't seem like it buys you anything.
Every time I've run out of physical memory on Linux I've had to just reboot the machine, since I couldn't issue any kind of command via the input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.
The situation mentioned isn't about running out of memory, but about using memory more efficiently.
Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not at all clear which memory to free up (by killing processes).
If you are lucky, there's one giant process with tens of GB of resident memory to kill to put your system back into a usable state, but that's not the only case.
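For what it's worth, Linux does expose a knob for steering that choice: each process has an `oom_score_adj` file under `/proc` (range -1000 to 1000) that biases the OOM killer's victim selection. A minimal sketch (Linux-specific; the function name is my own) where a process volunteers itself to be killed first:

```python
from pathlib import Path

def volunteer_for_oom_kill(score: int = 1000) -> None:
    """Raise this process's oom_score_adj so the OOM killer picks it first.

    /proc/<pid>/oom_score_adj ranges from -1000 (never kill) to 1000
    (kill first). Raising the score needs no special privileges;
    lowering it generally requires CAP_SYS_RESOURCE.
    """
    Path("/proc/self/oom_score_adj").write_text(str(score))

volunteer_for_oom_kill()
```

A "giant" cache-like process that can tolerate being restarted could do this at startup, so the kernel kills it rather than the desktop environment.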
Windows doesn't do that, though. If a process starts thrashing the performance goes to shit, but you can still operate the machine to kill it manually. Linux though? Utterly impossible. Usually even the desktop environment dies and I'm left with a blinking cursor.
What good is it to get marginally better performance under low memory pressure at the cost of having to reboot the machine under extremely high memory pressure?
In my experience the situations where you run into thrashing are rather rare nowadays. I personally wouldn't give up a good optimization for the rare worst case. (There are probably some knobs to turn as well, but I haven't had the need to figure that out.)
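One such knob is `vm.swappiness`, readable at `/proc/sys/vm/swappiness`, which biases the kernel between dropping page cache and swapping out anonymous pages. A quick Linux-only sketch to inspect it:

```python
from pathlib import Path

# vm.swappiness: 0 avoids swapping anonymous memory aggressively,
# higher values (up to 200 on recent kernels) swap sooner to
# preserve more page cache. The common default is 60.
swappiness = int(Path("/proc/sys/vm/swappiness").read_text())
print(f"vm.swappiness = {swappiness}")
```

Writing a new value (via `sysctl vm.swappiness=N` as root) tunes how eagerly the system trades swap traffic for cache.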
I believe it's not very hard to get into that situation intentionally, but... if you notice it doesn't work, won't you just stop? (It's not as if this would work without swap anyway; you'd just get an OOM kill without the thrashing pain.)
I don't intentionally configure crash-prone VMs. I have multiple concerns to juggle and can't always predict with certainty the best memory configuration. My point is that Linux should be able to deal with this situation without shitting the bed. It sucks to have some unsaved work in one window while another has decided that now would be a good time to turn the computer unusable. Like I said before, trading instability for marginal performance gains is foolish.
You're proposing that every porn site on the planet pings its users' governments' APIs to see whether they're adults? In other words, that any random site needs to be able to contact hundreds of APIs.
It doesn't sound simple. Now there needs to be some kind of pipeline that can route a new kind of information from the OS (perhaps from a physical device) to the process, through the network, to the remote process. Every part of the system needs to be updated in order to support this new functionality.
It's not simple, but it's also not new. mTLS has allowed mutual authentication on the web for years. If a central authority were signing keys for adults, none of the protocols we currently use would need to change (although servers would need to be configured to check signatures).
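As a sketch of what that server-side configuration might look like, here's Python's stdlib `ssl` module set up to demand a client certificate; the file names are placeholders, and "adult_ca.pem" stands in for the hypothetical central authority described above:

```python
import ssl

# Server-side TLS context that refuses clients without a certificate.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Placeholder paths for illustration only:
# ctx.load_cert_chain("server.crt", "server.key")    # the site's own certificate
# ctx.load_verify_locations(cafile="adult_ca.pem")   # trust the signing authority
ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert
```

With `CERT_REQUIRED`, the TLS handshake itself rejects any client whose certificate doesn't chain to the trusted CA, so the application code never sees unverified visitors.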
And is it easier to implement ID checks for every online account that people have, have had, or will ever have?
Parents need to start parenting by taking responsibility for what their kids are doing, and governments should start governing with regulations on ad tech and addictive social media platforms, instead of using easily hackable platforms for de-anonymization, which in turn enable mass identity theft.