I drank the Erlang Kool-Aid around the same time this was published. In 2013 I worked for a company that had a few Erlang services (as well as some JVM services, a mix of Scala and to a lesser degree Java).
One thing I was tasked with was replacing the ingress data collector. One of the limitations of Erlang at the time was that all SSL termination was funneled through a single core. Once the Java replacement was deployed, we saw a massive decrease in latency (the p95s and p99s especially) and shed all the weird operational overhead of trying to understand what the Erlang VM was doing at any given moment.
Say what you will about Java and the JVM, but it's a fantastic platform for reasonably high-performance servers. Erlang may make a lot of claims about high concurrency and scalability, but in practice I've had considerably more success with the JVM.
I haven't touched Erlang since 2013, so in the intervening 11 years I can only hope it has gotten better, though I have zero interest in trying it again.
> 100% they would pay a lot of money to be able to hang out with Joe Rogan, or some OnlyFans person, and those pornstars or podcast hosts will never disagree with them, never get mad at them, never get bored of them, never think they're a loser, etc.
This is the stuff of Brave New World. It's happening to us in real time.
I inherited a Go project that has two different commands under `cmd`, and when I run this against one of those `main`s, it flags code as dead even though that code is used by the other command.
YouTube TV injects ads on certain content. I know it's there for on-demand TV shows and some live sports. In the live case, the injected ad is literally overlaid on top of the channel's own ads.
I think years ago, when I first subscribed, one of the major benefits was the ability to skip through these ads just like a DVR, but you can no longer do that.
Cable TV broadcasts include both the network's ads and slots for the carrier to run their own ads. They're not inserting bonus ads on top of the actual content. Again, this is exactly how any cable provider has always worked.
We don't have a choice. That's how cable works, and YouTube TV exists for those of us who need it. They can't magically create an ad-free broadcast of TNT or something. How would that even work?
You don't have to like broccoli but it'd be weird to complain that it doesn't taste like chocolate.
> The name "Slashdot" came from a somewhat "obnoxious parody of a URL" – when Malda registered the domain, he desired to make a name that was "silly and unpronounceable" – try pronouncing out, 'h-t-t-p-colon-slash-slash-slashdot-dot-org'".
I really do mean the root directory! As in, a kinda joke that the site was a central place for news, so it would act as a root directory for it.
It’s virtually impossible to do free threading safely, especially with large codebases developed by multiple people. This includes tiny Python scripts, since they pull in a bunch of dependencies that are exactly such codebases.
It’s like saying that C is a safe language, just “get good” at it.
There are safe alternatives such as structured concurrency.
Rust is a brilliant lesson in using traditional threading safely. It uses & for thread-shared data and constrains &mut to a single thread at a time, which naturally pushes people to keep single-threaded data on an object only accessible from a single thread, and to make multithreaded data either immutable, mutex-protected, or atomic.
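A minimal sketch of what that discipline looks like in practice (scoped threads sharing a plain & reference, with a Mutex around the one piece of mutable state; this is just an illustration, not code from the parent comment):

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    // Read-only data is shared freely across threads via & references.
    let config = vec![1u64, 2, 3];
    // The one piece of cross-thread mutable state goes behind a Mutex.
    let total = Mutex::new(0u64);

    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                let local: u64 = config.iter().sum(); // shared read: fine
                *total.lock().unwrap() += local;      // mutation only via the lock
            });
        }
    });

    // All scoped threads have joined, so exclusive access is available again.
    println!("total = {}", total.into_inner().unwrap());
}
```

Trying to mutate `config` or `total` directly from inside the spawned closures is rejected at compile time, which is the point: the single-threaded and multithreaded parts of the data end up clearly separated.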
Alternatively, message passing isn't traditional threading, but Erlang/Go-style languages offer another way to approach concurrency and parallelism.
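Rust's standard library also supports a basic version of that style; a rough sketch using std::sync::mpsc (nothing Erlang-specific here, just channels between OS threads):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Each worker owns its own data and only communicates by sending messages.
    let handles: Vec<_> = (0..4)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(format!("worker {id} done")).unwrap();
            })
        })
        .collect();

    drop(tx); // close the original sender so the receive loop can end

    for msg in rx {
        println!("{msg}");
    }
    for h in handles {
        h.join().unwrap();
    }
}
```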
In my experience with multi-threaded programming, C++ code with "careless sharing issues" is often filled with multiple threads accessing the same object and relying on convention to avoid calling the wrong thread's methods; pervasive data races and unsynchronized variable access; mutexes taken on only one side of a piece of shared memory; and logical race conditions that can only be fixed by adding more mutexes (risking deadlock) or rewriting the code. Rust code, by contrast, tends not to have these issues to begin with (outside the implementation of synchronization primitives): it puts reader and writer methods on separate handle objects, uses Arc to manage cross-thread shared memory, and so on, which makes the code either correct or at least tractable to learn and make correct.
I also struggle to understand the threading model of COM and of C libraries like libusb (https://libusb.sourceforge.io/api-1.0/libusb_mtasync.html), though that might just be me, and each library tends to have a different threading model. Rust's Send/Sync is a 90% solution: you can learn it upfront, it's checked by the compiler, it applies to every library, and it works for most use cases.
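For what it's worth, the "checked by the compiler" part looks roughly like this; Rc is just a stand-in for any type that isn't safe to move across threads:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<Vec<i32>> is Send + Sync, so handing it to another thread compiles.
    let shared = Arc::new(vec![1, 2, 3]);
    let arc_clone = Arc::clone(&shared);
    thread::spawn(move || println!("{:?}", arc_clone)).join().unwrap();

    // Rc is !Send: uncommenting the spawn below is a compile error,
    // not a data race discovered in production.
    let local = Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || println!("{:?}", local));
    println!("{:?}", local);
}
```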
> In my experience with multi-threaded programming, C++ code with "careless sharing issues" is often filled with multiple threads accessing the same object and relying on convention to avoid calling the wrong thread's methods
Right, I can see the kind of codebase you're referring to.
I don't see Rust as a magic weapon that solves concurrency issues, though, mainly because Rust (the compiler) has a very limited view of what happens over the lifetime of a multithreaded system, and no view at all of the lifetime of a multiprocess system.
Even when writing purely single-threaded Rust, you quickly end up having to let go of the strictly static memory-sharing checks and switch to dynamic ones.
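For example, the usual escape hatch is something like Rc<RefCell<_>>, where the aliasing rule is enforced at runtime rather than at compile time (a small illustrative sketch, not taken from any particular codebase):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared mutable structures are awkward under purely static borrow
    // checking, so single-threaded code often reaches for Rc<RefCell<_>>.
    let node = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&node);

    node.borrow_mut().push(4);        // ok: exclusive borrow, released immediately
    println!("{:?}", alias.borrow()); // ok: shared borrow

    let _guard = node.borrow();
    // alias.borrow_mut();            // would panic at runtime: already borrowed
}
```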
I have yet to find a use case where Rust solves anything but the most blatant synchronization issues.
I'm not sure how far the author has gone, but they should check out Gorilla compression[1] (just the compression part, not the whole database). It works well for time-series data, and might be suitable here? Basically if your numbers don't deviate massively--think of a CPU metric that stays in the same place throughout the day, inside the bounds of 0-100--the compression is really effective.
Clickhouse supports Gorilla and some others[2] that might also be of use.
Gorilla is XOR compression, which is better for time series where the metrics change smoothly from one point to the next, because it just XORs each value against the previous one.
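A toy sketch of that XOR step (the real Gorilla format then bit-packs the leading/trailing zero counts of each delta, which this skips):

```rust
// XOR each float against the previous one; a slowly-changing series produces
// deltas with long runs of zero bits, which then compress well.
fn xor_deltas(values: &[f64]) -> Vec<u64> {
    let mut prev = 0u64;
    values
        .iter()
        .map(|v| {
            let bits = v.to_bits();
            let delta = bits ^ prev;
            prev = bits;
            delta
        })
        .collect()
}

fn main() {
    // A flat CPU-style metric: most deltas are zero or have few significant bits.
    let samples = [42.0, 42.0, 42.5, 42.5, 43.0];
    for d in xor_deltas(&samples) {
        println!("{:016x} (leading zeros: {})", d, d.leading_zeros());
    }
}
```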
Floats should really not be thought of as byte streams; instead, they are three bit fields packed into a single word. Splitting the sign, exponent, and mantissa into three separate streams compresses way better than keeping them all together. At that point you are just dealing with "how to compress integers," which is a much simpler problem.
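Roughly, the field splitting looks like this (a simplified sketch, not any particular library's on-disk layout):

```rust
// Pull the three IEEE 754 fields of each f64 apart into separate streams,
// so a compressor sees three homogeneous integer sequences instead of
// interleaved bit fields.
fn split_fields(values: &[f64]) -> (Vec<u8>, Vec<u16>, Vec<u64>) {
    let mut signs = Vec::new();
    let mut exponents = Vec::new();
    let mut mantissas = Vec::new();
    for v in values {
        let bits = v.to_bits();
        signs.push((bits >> 63) as u8);                // 1 bit
        exponents.push(((bits >> 52) & 0x7ff) as u16); // 11 bits
        mantissas.push(bits & ((1u64 << 52) - 1));     // 52 bits
    }
    (signs, exponents, mantissas)
}

fn main() {
    let (s, e, m) = split_fields(&[1.0, 2.0, 4.0, 8.0]);
    // Constant signs, identical mantissas, slowly stepping exponents:
    // each stream is far more regular than the raw interleaved bytes.
    println!("signs={s:?} exponents={e:?} mantissas={m:?}");
}
```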
I played with zstd, and it compresses way better if you take 8 float64s and shuffle the bits sideways. This is a trick that blosc popularized [1].
Adding a shuffle filter ahead of zlib or zstd worked way better for reducing the size of the data when dealing with float streams. This groups the bits in a similar fashion to splitting the floats into components, but is much simpler on the decode path with SIMD.
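A simplified byte-level sketch of that kind of shuffle (blosc's bitshuffle variant does the same transposition at bit granularity):

```rust
// Blosc-style byte shuffle for a block of f64s: emit byte 0 of every value,
// then byte 1 of every value, and so on. Similar-magnitude floats end up with
// long runs of near-identical bytes, which zstd/zlib handle well.
fn shuffle_bytes(values: &[f64]) -> Vec<u8> {
    let mut out = Vec::with_capacity(values.len() * 8);
    for byte_index in 0..8 {
        for v in values {
            out.push(v.to_le_bytes()[byte_index]);
        }
    }
    out
}

fn main() {
    let block = [100.0f64, 100.25, 100.5, 101.0];
    let shuffled = shuffle_bytes(&block);
    // Feed `shuffled` to the general-purpose compressor; decoding is just the
    // inverse transposition.
    println!("{shuffled:?}");
}
```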