Use our new open source (modification and redistribution not permitted) app to exchange end-to-end encrypted (from your client to our server) messages with your friends! Having all your data on our service protects your data sovereignty (we do not provide for export or interop) by guaranteeing that you always have access to your full history! Usage also protects your privacy (we analyze your data for marketing purposes) by preventing unscrupulous third parties from analyzing your data for marketing purposes.
If we had competent regulators, this sort of blatant willful negligence would constitute false advertising.
FreeBSD is not "compatible with Linux"; it provides a way to run Linux applications under a Linux-like syscall environment. What you're suggesting is as if you could load Linux kernel modules into the FreeBSD kernel.
The issue with NT is the driver ecosystem. You'd have to reimplement a lot of under-documented NT behavior for NT drivers to behave themselves, and making that work within the Linux kernel would require major architectural changes. The userland is also where a lot of magic happens for application compatibility.
Intellectual property, as it exists and is used today, overwhelmingly serves to stifle competition and entrench monopolies. It's used to project power internationally by deputizing foreign countries to protect American business interests. It's a far cry from how it's popularly presented: a way for the "little guy" to protect their inventions.
GHA is based on Azure Pipelines. This is evident in how bad its security stance is, since Azure Pipelines was designed to be used in a more closed/controlled environment.
Absolutely, I was not trying to claim otherwise. But since we're engineers (at least I like to see myself as one), it's always worth keeping in mind that almost everything comes with tradeoffs, even traits :)
Someone down the line might be wondering why their Rust builds suddenly take 4x as long after merging something, and just maybe remembering this offhand comment will help them find the issue faster :)
It's never the case that only one thing is important.
In the extreme, you surely wouldn't accept a 1-day or even 1-week build time, for example? A 1-week build isn't even purely hypothetical: a system could fuzz over candidate compilations, run load tests, do PGO, and deliver something better. But even if runtime performance were so important that you had such a system, it's obvious you wouldn't accept developer cycles that take a week to compile.
Build time also matters for releases: if you have a critical bug in production and need to ship the fix, a 1-hour build can still cost you a lot. Release build time doesn't matter until it does.
A lot of C++ devs advocate for simple replacements for the STL that don't rely too heavily on zero-cost abstractions. That way you can have small binaries, fast compiles, and a fast-debug kind of build where you only turn on a few optimizations.
That way you can get most of the speed of the Release version, with a fairly good chance of getting usable debug info.
A huge issue with C++ debug builds is that the resulting executables are unusably slow, because the zero-cost abstractions are not zero cost in debug builds.
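To make that concrete, here's a toy sketch (not from any real codebase) of the kind of "zero-cost" pipeline that compiles down to a plain loop at -O2 but turns into a pile of un-inlined iterator and lambda calls per element at -O0:

    // Toy example: cheap at -O2, painfully slow at -O0, because none of the
    // iterator/lambda machinery gets inlined in an unoptimized build.
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> v(1'000'000);
        std::iota(v.begin(), v.end(), 0);

        // "Zero-cost" abstraction: free once the optimizer runs, not before.
        long long sum = std::accumulate(v.begin(), v.end(), 0LL,
                                        [](long long acc, int x) { return acc + x * 2; });

        std::printf("%lld\n", sum);
    }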
It's not just the compiler - MSVC, like all the others, has a tendency to mangle code in release builds to such an extent that the debug info is next to useless (which, to be fair, is what I asked it to do, so I don't fault it).
Now to hate a bit on MSVC - its Edit & Continue functionality makes debug builds unbearably slow, but at least it doesn't actually work, so the first thing I do is turn it off.
You can debug release builds with gcc/clang just fine. They don't generate debug information by default, but you can always request it ("-O3 -g" is a perfectly fine combination of flags).
I think this also massively depends on your domain, familiarity with the code base and style of programming.
I've changed my approach to debugging significantly over time (probably in part due to Rust's slower compile times), and I usually get away with 2-3 compiles to fix a bug but spend more time reasoning about the code.
Folks have worked tirelessly to improve the speed of the Rust compiler, and it's gotten significantly faster over time. However, there are also language-level reasons why it can take longer to compile than other languages, though the initial guess of "because of the safety checks" is not one of them; those checks are quite fast.
> How slow are we talking here?
It really depends on a large number of factors. I think saying "roughly like C++" isn't totally unfair, though again, it really depends.
My initial guess would be "because of the zero-cost abstractions", since I read "zero-cost" as "zero runtime cost" which implies shifting cost from runtime to compile time—as would happen with eg generics or any sort of global properties.
(Uh oh, there's an em-dash, I must be an AI. I don't think I am, but that's what an AI would think.)
People do have cold Rust compiles that can push up into the hours. Large crates often make design choices that keep the code in a more compile-time-friendly shape.
Note that C++ also has almost as large a problem with compile times, with big build fanouts including from templates, and incremental builds aren't always a realistic fix either, especially for the time burnt on linking. E.g. I believe Chromium development often uses a mode with dynamic linking of .dlls instead of the all-statically-linked binaries they release, exactly to speed up incremental development. The "fast" case is C, not C++.
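For the template part specifically, here's a toy header (names invented for illustration) showing where the fanout comes from, plus the standard extern-template mitigation:

    // matrix.h - every translation unit that includes this and touches
    // Matrix<float, 4> re-parses and re-instantiates the template, which is
    // where a lot of C++ build fanout comes from.
    #pragma once
    #include <array>
    #include <cstddef>

    template <typename T, std::size_t N>
    class Matrix {
    public:
        T& at(std::size_t r, std::size_t c) { return data_[r * N + c]; }

    private:
        std::array<T, N * N> data_{};
    };

    // One mitigation: an explicit instantiation declaration in the header...
    extern template class Matrix<float, 4>;
    // ...paired with a definition in exactly one .cpp file:
    //     template class Matrix<float, 4>;
    // so the common instantiation is compiled (and its code generated) once.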
> I believe Chromium development often uses a mode with dynamic linking of .dlls instead of the all-statically-linked binaries they release, exactly to speed up incremental development. The "fast" case is C, not C++.
There's no Rust codebase that takes hours to compile cold unless 1) you're compiling a massive codebase in release mode with LTO enabled, in which case, you've asked for it, 2) you've ported Doom to the type system, or 3) you're compiling on a netbook.
I'm curious whether this is tracked or observed somewhere; crater runs are a huge source of information, and metrics about the compilation time of crates would be quite interesting.
I had this idea during the pandemic, 5 years ago now, and even did some of the work to figure out which variables I'd need to extract to make it work, but I never found the time/motivation to work on it for real. Really happy to see someone put in the effort.
The productive capacity for the goods we consume was built by average people. Billionaires only exist to skim off the top, and are not a required component of the process.
That's a misunderstanding. Compression algorithms are typically designed with a tunable state-size parameter. The issue is that if you have a large transfer where one side might crash and resume, you need some way to persist the compressor state so you can pick up where you left off.
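One common workaround (a sketch only, not any particular library's API, and not necessarily what the parent has in mind) is to reset the compressor at chunk boundaries so the only thing you have to persist is a chunk index, at some cost in ratio; compress_chunk() below is a hypothetical stand-in for a real codec:

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    using Bytes = std::vector<std::uint8_t>;

    // Hypothetical stand-in: real code would call into zlib/zstd/etc. here.
    Bytes compress_chunk(const Bytes& chunk) { return chunk; }

    struct Transfer {
        std::uint64_t next_chunk = 0;  // the checkpoint to persist durably
        std::size_t bytes_sent = 0;

        void run(const Bytes& data, std::size_t chunk_size) {
            for (std::size_t off = next_chunk * chunk_size; off < data.size();
                 off += chunk_size, ++next_chunk) {
                std::size_t len = std::min(chunk_size, data.size() - off);
                Bytes compressed = compress_chunk(
                    Bytes(data.begin() + off, data.begin() + off + len));
                // send `compressed` to the peer here; after a crash, reload
                // next_chunk from durable storage and call run() again to
                // pick up at the first unacknowledged chunk.
                bytes_sent += compressed.size();
            }
        }
    };

    int main() {
        Transfer t;
        t.run(Bytes(1 << 20, 0xAB), 64 * 1024);
        std::printf("sent %zu bytes across %llu chunks\n",
                    t.bytes_sent, static_cast<unsigned long long>(t.next_chunk));
    }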
This is what Zoom claimed was e2ee for a little while before getting in trouble for it.