I don't think so. What do you need it for, if I may ask? If some people actually need a patch, it's much more likely that people will work on making it committable.
fwiw, we have a team working on OrioleDB at Supabase, with the plan to get to GA later this year or early next year. We'll continue to submit patches upstream for the TAM, and of course that will take as long as it takes for the community to accept them. Our focus right now is reliability and compatibility, so that the community can gain confidence in the implementation.
I was wondering how far away OrioleDB is from becoming a pure extension instead of a Postgres fork. I'm not an expert on TAM by any means, but I was curious whether the OrioleDB team managed to upstream some parts of their fork.
Most alternative PG storage engines have stumbled, and OrioleDB touches a lot of core surfaces.
The sensible order is: first make OrioleDB rock-solid and bug-free; ( https://github.com/orioledb/orioledb/issues?q=is%3Aissue%20s... ) then, using real-world feedback and perf data, refactor and carve out patches that are upstream-ready. That’s a big lift for both the OrioleDB team and core PG.
From what I understand, they’re aiming for a release candidate in the next ~6 months; meaningful upstreaming would come after that.
In other words: make it work --> validate the plan --> upstream the final patches.
I had tried Chimera Linux with dinit (on a VM with a GNOME desktop). It was a good experience while it lasted; I loved the TL;DR that Chimera writes, and it's a DIY distro that felt a lot like Arch in its early days.
But now I'm back on Fedora for want of packages. Not being on a mainstream distro is all rainbows and unicorns until you hit the corner case, or the missing package that is only available as a Flatpak and won't integrate with the look and feel of the desktop environment.
You'll still be off the beaten path, but you could fix this particular complaint by running Chimera and then putting Nix (the package manager) on top of it.
I had only been following this language with some interest. I guess it was born at GitLab; not sure if the creator(s) still work there. This is what I'd have wanted golang to be (albeit with a GC, for when you don't have clear lifetimes).
But how would you differentiate yourself from https://gleam.run, which can leverage OTP? I'd be more interested if we could adapt Gleam to GraalVM isolates so we can leverage the JVM ecosystem.
While I indeed worked for GitLab until 2021, Inko's development began before I joined GitLab, and GitLab wasn't involved in it at all.
Inko was hosted on GitLab for a while (using their FOSS sponsorship plan), but I moved it back to GitHub to make contributing easier and to increase visibility. As much as I prefer GitLab, the sad reality is that by hosting a project there you make it a lot harder for people to report bugs, submit patches, or even just find out about your project in the first place. I could go on for a long time about how GitLab wasted its chance to overtake GitHub, but that's for another time :)
I guess linked lists as they are are very useful for implementing queues (particularly those that feed thread pools), where the cost of a growable array is not needed and cache locality does not matter. Continuing with the thread pool example: it's almost guaranteed that having the next element in the cache of a CPU that is not going to execute the next Runnable is a waste.
In Java in particular, both the array-backed and the linked implementations of blocking queues should perform equally well. FWIW, most queue implementations are linked lists.
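A minimal sketch of that in Java, using the standard java.util.concurrent classes (ArrayBlockingQueue, LinkedBlockingQueue, ThreadPoolExecutor); the point is just that either queue flavour can feed the same pool, and the worker that dequeues a task is rarely the thread (or core) that enqueued it:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueChoice {
    public static void main(String[] args) {
        // Array-backed: fixed capacity, contiguous storage.
        BlockingQueue<Runnable> arrayQueue = new ArrayBlockingQueue<>(1024);

        // Node-based linked list: optionally bounded, allocates a node per element.
        BlockingQueue<Runnable> linkedQueue = new LinkedBlockingQueue<>();

        // Either queue can feed the pool (arrayQueue is a drop-in alternative here).
        // The consuming worker is usually not the thread that enqueued the task,
        // so the queue's memory locality matters little compared to the handoff cost.
        ExecutorService pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, linkedQueue);

        for (int i = 0; i < 8; i++) {
            final int task = i;
            pool.submit(() -> System.out.println(
                    "task " + task + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```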
The best implementations typically have a queue per thread and work stealing. The first threads to finish their assigned work will grab items from other queues, but until then you get perfect cache locality.
Java's queues and global threadpool queues in general are pretty old hat.
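For reference, a rough sketch of that shape using Java's built-in work-stealing pool (Executors.newWorkStealingPool(), backed by ForkJoinPool); the uneven task sizes are made up, just to give idle workers something to steal:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WorkStealingDemo {
    public static void main(String[] args) throws InterruptedException {
        // ForkJoinPool underneath: each worker thread owns a deque of tasks.
        // A worker pushes and pops its own deque (good locality); only when it
        // runs dry does it steal from the tail of another worker's deque.
        ExecutorService pool = Executors.newWorkStealingPool();

        for (int i = 0; i < 32; i++) {
            final int task = i;
            pool.submit(() -> {
                // Simulate uneven work so some workers finish early and steal.
                busyWork(task % 4 == 0 ? 5_000_000 : 50_000);
                System.out.println("task " + task + " ran on "
                        + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    private static void busyWork(int iterations) {
        long acc = 0;
        for (int i = 0; i < iterations; i++) acc += i;
        if (acc == 42) System.out.println(); // keep the loop from being optimized away
    }
}
```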
> How is that possible? Assuming all things are equal AOT should always be better.
The primary thing here is that the hotter the code path, the more a JIT can optimize it (even though the JIT compiler itself is slower than an AOT compiler). That's impossible with AOT, where you have a static binary compiled with -O2 or -O3 and that's it. Java can also remove virtual dispatch if it finds a single implementation of an interface, or a single concrete class of an abstract class, which is not possible with C++ (where we always go through the vtable, which almost always means a cache miss). So C++ gives you the control to choose whether you want to pay the cost, and if you do, you always pay it; in Java the runtime can be smart about it.
Essentially it boils down to runtime vs. compile-time optimizations: the runtime has a much richer set of profiles and patterns to base decisions on, and hence can be faster by quite a bit.
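A hedged illustration of the devirtualization point, with hypothetical Shape/Circle names: as long as Circle is the only loaded implementation, the JIT can (and typically will) turn the interface call in the hot loop into a direct, usually inlined, call:

```java
// Hypothetical example: with only one implementation loaded, class hierarchy
// analysis lets the JIT replace shape.area() with a direct (and usually inlined)
// call on the hot path. An AOT-compiled C++ equivalent keeps the vtable dispatch
// unless the programmer opts out of virtual calls up front.
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override public double area() { return Math.PI * r * r; }
}

public class Devirtualization {
    public static void main(String[] args) {
        Shape shape = new Circle(1.5);
        double sum = 0;
        // Hot loop: after warm-up the JIT compiles this with the virtual call
        // to area() resolved to Circle.area() directly.
        for (int i = 0; i < 100_000_000; i++) {
            sum += shape.area();
        }
        System.out.println(sum);
    }
}
```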
> Java can take away the virtual dispatch if it finds a single implementation of interface or single concrete class of an abstract class
It can even do that if there are multiple implementations, by adding checks to detect when it is necessary to recompile the code with looser assumptions, or by keeping around multiple versions of the compiled code.
It needs the ability to recompile code as assumptions are violated anyways, as “there’s only one implementation of this interface” can change when new JARs are loaded.
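A small sketch of that idea with made-up Codec classes: the call site is warmed up while only one implementation is in use, and bringing a second implementation into play later is exactly the kind of event that invalidates the speculative, devirtualized code and forces a recompile with looser assumptions (the sketch only shows the shape; the deoptimization itself happens inside the JVM):

```java
// Hypothetical sketch: the call site starts out monomorphic, so the JIT can
// compile it as a speculative direct call guarded by a cheap type check (or rely
// on class hierarchy analysis). When another implementation shows up, the
// assumption is invalidated and the JVM deoptimizes and recompiles.
interface Codec {
    int encode(int x);
}

final class FastCodec implements Codec {
    @Override public int encode(int x) { return x * 31; }
}

final class SlowCodec implements Codec {
    @Override public int encode(int x) { return Integer.rotateLeft(x, 7); }
}

public class DeoptSketch {
    static int run(Codec c, int n) {
        int acc = 0;
        for (int i = 0; i < n; i++) acc += c.encode(i);
        return acc;
    }

    public static void main(String[] args) {
        // Warm up with a single implementation: the JIT may devirtualize encode().
        System.out.println(run(new FastCodec(), 50_000_000));

        // A second implementation violates the "only one impl at this call site"
        // assumption; affected compiled code is discarded and regenerated.
        System.out.println(run(new SlowCodec(), 50_000_000));
    }
}
```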
While I tend to agree with the conclusion on premature optimization, I disagree with the assumption that it is premature for most startups. In fact, it's reasonable insurance for startups: if they do succeed, it solves the problem of scale at that point without needing huge changes.
BTW, Google didn't open-source Kubernetes for charity (like all businesses, they want to make money). They knew they had lost the cloud war, with Amazon/Azure gulping up almost 80-90% of the market, so they wanted a platform onto which they could lure users back to Google Cloud once they started providing kick-ass services (by defusing the much-famed vendor lock-in). And since Docker had solved dependency management in a reasonable way (not in a very esoteric way like Nix) and was dipping its toes into distributed orchestration, they took it as the open fundamental unit for orchestrating distributed systems.
But yes, Google succeeded in convincing dev/ops to use k8s by frightening them with vendor lock-in. The most ironic thing I see about k8s is that HPA, restart-on-crash, and the like are touted as great new features. These features have existed in Erlang for decades (supervisors and spawning actors). I'm not sure why Google did not try to leverage the Erlang ecosystem; it might have been much faster to market (maybe NIH).
> ... it'll solve the problem of scale at that point without needing to make huge changes.
This is incorrect. It's a common mistake to pre-optimise for a future that may never come. I call it "What-If Engineering".
You should always just build a monolith and scale it out as and when you understand what it is that needs scaling. Choosing the right languages and frameworks is where you need to put your effort. Not K8s.
What you say is true, but that is how insurance works: you pay a premium for "what if something unexpected happens". There is a nine-nines chance it won't happen, but we still keep paying. K8s is similar.
It's because OTP does not integrate with anything not running on the Erlang VM, whereas k8s derives from a different family tree of general, language-independent schedulers/environments.
Language independence is not a trait of k8s; it's an artifact of Docker packaging Java/C++/Perl/Python/Go/Rust etc. as an arch-dependent image. TBH I find k8s support for languages other than Golang pretty poor (there have been attempts by Red Hat to get Java into k8s with native-image, but it seems not to have made it big).
Language independence is a trait of k8s in the sense that none of its interfaces are in any way specific to a language. The most restrictive are the few APIs based on gRPC, because the state of gRPC libraries is still poor in some places.
Unless you want to embed another language in-process of some k8s components, but the need to do that is disappearing as things are moved out of process.
Hoping this release will bring the advantages below:
1. A statically compiled language with a GC-d runtime that compiles quicker than golang
2. Something that brings algebraic effects to the mainstream, and with them an arguably better model for concurrency / parallelism
3. Support for value types to take advantage of modern CPU caches
Finally golang finds some real competition (from a very unlikely source though). Hoping ReasonML will become more popular with this release with true parallelism and nice concurrency.
ReasonML is now Rescript, and is still using the 4.06 compiler. I think the idea is to move ahead largely independently of OCaml, and a move to 5.0 now is probably seriously ambitious given the runtime overhaul.
So it's Reason, not ReasonML, which is the umbrella project's name, and Rescript is an incompatible syntax split off from the Bucklescript team (which previously transpiled Reason to JS). Bucklescript's new name is... Rescript.
Agree that Java is pretty good with records / sealed types / Loom, but one nice thing about the Oracle Java team is that they do not ship half-baked features (primarily because they have the last-mover advantage). For example, Valhalla will have value types, but they'll be immutable so they can be freely copied and used. Loom will have structured concurrency on debut, which IMHO makes virtual threads manageable.
But I have my own apprehensions about Loom, which effectively breaks synchronized blocks (by pinning the carrier thread), and those are used extensively in legacy libraries and even in more recent ones (like the OpenTelemetry Java SDK).
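A minimal sketch of that pinning concern; the exact behaviour depends on the JDK version (on releases where the limitation applies, blocking inside a synchronized block keeps the carrier thread pinned, and newer JDKs are working on lifting it):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PinningSketch {
    private static final Object LOCK = new Object();

    public static void main(String[] args) {
        try (ExecutorService vthreads = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 4; i++) {
                vthreads.submit(() -> {
                    synchronized (LOCK) {
                        try {
                            // Blocking while holding a monitor: on JDKs with the
                            // limitation, the virtual thread cannot unmount and its
                            // carrier (platform) thread stays pinned for the duration.
                            Thread.sleep(1_000);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
        } // close() waits for the submitted tasks to finish

        // Running with -Djdk.tracePinnedThreads=full (where supported) reports pin
        // events; switching from synchronized to java.util.concurrent.locks.ReentrantLock
        // is the usual workaround, since lock-based blocking does not pin the carrier.
    }
}
```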
Also, is this a single-file DB? If so, is the format stable?