Makes a lot of sense for SQLite to be written in C.
It's a heavily optimized and debugged database implementation: Just look at btree.c with all its gotos :)
The only language that would make sense for a partial/progressive migration is Zig, in large part due to its compatibility with C. It's not mentioned in the article, though.
Zig hasn't even had its first release yet, and projects written in it still break on new releases. Given their stance on boringness and maturity, it would make no sense for SQLite to consider Zig yet.
SQLite is never gonna be rewritten by its creators in another language. A rewrite is highly doubtful considering the age of SQLite and its support roadmap, which from what I've read runs until 2060 or the mid-2050s.
> SQLite is never gonna be rewritten by its creators in another language.
Almost certainly correct. It is, however, being rewritten in Rust by other people: https://github.com/tursodatabase/turso. That is probably best thought of as a separate, compatible project rather than a true rewrite.
I'm biased, but when I see Discord as the only communication channel, it doesn't make for a serious project. I wish more projects would rely on IRC/Matrix plus forums.
For better or worse, plenty of serious projects are using Discord for communication. It's not great, but IRC and Matrix have their own problems (IMO Zulip is the best of the bunch, but doesn't seem to be particularly widely adopted).
Given the number of completely obscure and exotic platforms that have a C compiler, and the amount of tooling and analysis tools C has, I'd be surprised if anything comparable exists.
We pioneered a lot of things with Opa, 15 years ago now. Opa featured automatic code "splitting" between client and server and introduced the JSX syntax, although it wasn't called that at the time (Jordan at Facebook used Opa before creating React, though the discussions around the syntax happened at the W3C, notably with another Facebook employee, Tobie).
Since the Opa compiler was implemented in OCaml (we were more like Svelte than like React-as-a-pure-lib), we performed a lot of static analysis to prevent the wide range of attacks on frontend code (XSS, CSRF, etc.) and backend code. The Opa compiler became a huge beast, in part because of that.
In retrospect, better separation of concerns, forgoing entirely the idea of automatic code splitting (which is what React Server Components is), or even giving up a single-app semantics, is probably better for the near future. Our vision (way too early) was that we could design a simple language for the semantics and a perfect, advanced compiler that would magically output both the client and the server from that specification. Maybe it's still doable with deterministic methods. Maybe LLMs will get to automatic generation of all the parts in one shot before that.
Note that the exploits so far haven’t had much to do with “server code/data getting bundled into the client code” or similar which you’re alluding to. Also, RSC does not try to “guess” how to split code — it is deterministic and always user-controlled.
The vulnerabilities so far were weaknesses in the (de)serializer stemming from the dynamism of JavaScript: the ability to hijack the root object prototype, to toString functions to get their code, to override a Promise's then implementation, to construct a function from a string. The patches harden the (de)serializer against those dynamic pieces of JavaScript to close those gaps. This is similar to mistakes in parsers that are fooled by properties called hasOwnProperty/constructor/etc.
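To make that class of bug concrete, here's a minimal, hypothetical sketch (not the actual RSC deserializer) of how a merge step that trusts the keys of parsed input can be tripped up by JS dynamism:

    // Hypothetical sketch: a naive recursive merge over parsed input.
    function naiveMerge(target: any, src: any): any {
      for (const key of Object.keys(src)) {
        const val = src[key];
        // For a key literally named "__proto__", target[key] resolves
        // to Object.prototype, so the recursion merges attacker data
        // into the prototype of every object in the realm.
        target[key] =
          typeof val === "object" && val !== null
            ? naiveMerge(target[key] ?? {}, val)
            : val;
      }
      return target;
    }

    // JSON.parse creates an *own* property literally named "__proto__":
    naiveMerge({}, JSON.parse('{"__proto__": {"isAdmin": true}}'));
    console.log(({} as any).isAdmin); // true: prototype polluted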
The serialization format is essentially "JSON with Promises and code chunk references", and it seems there were enough places where the dynamic nature of JS can leak through that needed to be plugged. Hopefully, with more scrutiny on the protocol, these will be well understood by the team. The surface area there isn't growing much anymore (it's close to feature-complete), and the (de)serializers themselves are roughly 5 kloc each.
The problem you had in Opa is solved in RSC with build-time assertions: import "server-only" is the server-environment poison pill, and import "client-only" is the client-environment poison pill. These poison pills work transitively up the module import stack, are statically enforced, and prevent code (e.g. DB code, secrets, etc.) from being pulled into the wrong environment. Of course this doesn't prevent bugs in the (de)serializer, but it's why the overall approach is sound in the absence of (de)serialization vulnerabilities.
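For readers who haven't seen the mechanism, a minimal sketch, assuming a Next.js-style RSC setup and the server-only npm package (the module contents below are illustrative):

    // db.ts: the import below is the poison pill. If any module that
    // ends up in a client bundle transitively imports this file, the
    // build fails, so DB code and secrets can't leak to the browser.
    import "server-only";

    export const DATABASE_URL = process.env.DATABASE_URL;

    export async function getUser(id: string) {
      // Server-side only: safe to use credentials here.
      return { id, name: "example" };
    }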
The problem we tried to solve with Opa was more general than RSC, probably too general.
    // Opa decides
    function client_or_server(x, y) { ... }

    // Client-side
    client function client_function(x, y) { ... }

    // Server-side
    server function server_function(x, y) { ... }
Without the optional side inference (which could also use both sides), it seems we had similar side constraints, and serializers/sanitizers, probably with the same flaws as the recent vulnerabilities. Like the whole range of OWASP AppSec exploits circa 2013-2015 against browser countermeasures, when browsers were starting to roll out defense in depth based on string matching :)
Did you actually look at the blockchain node implementations as of 2025 and what's on the roadmap?
Ethereum nodes/L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work.
(not talking about "coins" and stuff obviously, another debate)
> Ethereum nodes/L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work.
What are you comparing against? Aren't they slower, less convenient, and less available than, say, DynamoDB or Spanner, both of which have been in full-service, reliable operation since 2012?
I think they mean big-D "Distributed", i.e. in the sense that a DHT is Distributed. Decentralized in both a logical and political sense.
A big DynamoDB/Spanner deployment is great as long as you can guarantee some benevolent (or just not-malevolent) org being around to host the deployment for everyone else. But technologies of this type have no answer to the key problem of "ensure the infra survives its own founding/maintaining org being co-opted + enshittified by parties hostile to the central purpose of the network."
Blockchains — and all the overhead and pain that comes with them — are basically what you get when you take the classical small-D distributed database design, and add the components necessary to get that extra property.
I think you are being downvoted because Ethereum requires you to stake 32 ETH (about $100k), and the entry queue right now is about 9 days while the exit queue is about 20 days. So only people with enough capital can join the network, and it takes quite some time to join or leave, as opposed to being able to do either whenever you want.
Codeberg is a fork of Gitea, itself a fork of Gogs.
Both forks originated for "philosophical" reasons, not technical ones, and Joe Chen (@unknwon on GH) deserves a lot of credit for building a clean forge in Go mostly by himself.
> Codeberg is a fork of Gitea, itself a fork of Gogs.
Codeberg is a website powered by Forgejo which is a fork of Gitea.
> Both forks originated for "philosophical" reasons, not technical ones
Gitea forked because one developer was the sole owner of the Gogs repository and refused to share maintainer rights. The fork was more "practical" than "philosophical".
Forgejo forked when a leading developer secretly created a company holding the Gitea trademark and logo. The fork was to regain control over the assets of the project (name/trademark, logo, etc.).
Seems like a 'you either die a hero or you live long enough to become the villain' type of behaviour, which is not uncommon to see in projects like these. Let's hope Codeberg doesn't end up in the same bucket.
That's the reason I don't want to jump on the Codeberg bandwagon just yet, although I'm very interested in self-hosting Forgejo.
I'd love to see something else though: a way to have repositories discoverable across all the centralized or self-hosted services out there. What I actually do love about GitHub is that, from time to time, it manages to surface some quite interesting projects and people to check out.
Of note, Codeberg is set up as an _eingetragener Verein_ (a German registered association) with charitable status, so it's a non-profit and the leadership must be elected by the members.
KDE has a similar structure with KDE e.V.
How long do you think until the inevitable community split into the Codeberg People's Front and the People's Front of Codeberg over some minor ideological disagreement?
As your example shows, GPT-5 Pro would probably do better than GPT-5.1, but its tokens are over ten times more expensive and I didn't feel like paying for them.
Extending beyond the pelican is very interesting, especially until your page gets enough recognition to be "optimized" by the AI companies.
It seems both Gemini 3 and the latest ChatGPT models have developed a deep understanding of the SVG representation, which seems like a difficult task. I would be incapable of writing an SVG without visualizing the result in a graphical feedback loop.
PS: It would be fun to add "animated" to the short prompt, since some models think of animation by themselves. I tried manually with GPT-5 Pro (using the subscription), and in a sense it's worse than the static image. For a start, there's an error: https://bafybeie7gazq46mbztab2etpln7sqe5is6et2ojheuorjpvrr2u...
I would also be unable to write SVG code to produce anything other than the simplest shapes.
I noticed that, on my page, Gemini 3.0 Pro did produce one animated SVG without being asked, for #8, "Generate an SVG of an elephant typing on a typewriter." Kind of cute, actually.
As for whether the images on the page will enter LLM training data: the page's HTML contains meta tags that I had Claude give me to try to prevent scraping.
A side thought, as we're working on 100% onchain systems (for digital asset security, so different goals):
Public chains (e.g. EVMs) can act as a tamper-evident gate that only promotes a new config artifact if (a) a delay or multi-sig review has elapsed, and (b) a succinct proof shows the artifact satisfies safety invariants like ≤200 features, deduped, schema X, etc.
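As a rough sketch (all names and thresholds below are illustrative, and the chain plumbing is omitted), the invariants in (b) are cheap to check, or to prove succinctly, before promotion:

    // Hypothetical invariant check a gate would enforce (or verify a
    // succinct proof of) before promoting a new config artifact.
    interface ConfigArtifact {
      schemaVersion: string;
      features: string[];
    }

    const MAX_FEATURES = 200;

    function satisfiesInvariants(a: ConfigArtifact): boolean {
      const deduped = new Set(a.features);
      return (
        a.schemaVersion === "X" &&            // expected schema
        a.features.length <= MAX_FEATURES &&  // size bound
        deduped.size === a.features.length    // no duplicate entries
      );
    }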
That could have blocked propagation of the oversized file long before it reached the edge :)