This repository was generated by Claude (Anthropic) as an experiment—to explore how Claude works and to reverse-engineer its approach. I spent a few hours discussing the idea with both Claude and ChatGPT, focusing on how to build a content-addressable, multilingual infrastructure for Python.
Two small PRs, each based on a single prompt, resulted in a CLI that captures the core idea. The goal was to make it relatable to others—let me know if it resonates with you.
Observations & Process: Claude asked a lot of clarifying questions, while ChatGPT "hallucinated" a broader "cognitive ecology" (likely because I’d mentioned the project under the name "mobius"). The final README reflects a blend of ChatGPT’s grandiose vision and Claude’s more grounded approach. It frames ouverture as a symbiotic post-LLM relationship—a bridge between existing knowledge (including code) and post-LLM AI systems.
The Bigger Picture: If ouverture succeeds, it could become an infrastructure like npmjs—but with less friction, less drama, and fewer barriers. The irony? The README’s vision remains relevant even without LLMs. The core idea—content-addressable, multilingual code—stands on its own.
Origins & Goals: This idea has been brewing for over a decade. My original goals were:
Code as a reusable resource: Write a function, store it, forget it, and retrieve it later, dependencies and all, without the hassle of reinventing wheels (e.g., leftpad or buried helper functions); see the sketch after this list.
Lowering cognitive barriers: Enable people to contribute to code without requiring English proficiency, aligning with the "think globally, act locally" ethos.
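Here is a minimal sketch of the first goal; `publish`, `fetch`, and the in-memory `STORE` are invented for illustration and say nothing about ouverture's actual CLI or storage. The point is only that a function can be addressed by a hash of its own source rather than by a name:

```python
import hashlib
import inspect

STORE = {}  # digest -> source; stands in for an on-disk or remote store

def publish(fn) -> str:
    """Store a function under the hash of its own source code."""
    source = inspect.getsource(fn)
    digest = hashlib.sha256(source.encode()).hexdigest()
    STORE[digest] = source
    return digest  # a stable, name-independent address for the code

def fetch(digest: str):
    """Rebuild the function from its stored source."""
    namespace = {}
    exec(STORE[digest], namespace)  # fine for a toy; real code needs sandboxing
    return next(v for v in namespace.values() if callable(v))

def left_pad(s, width, fill=" "):
    return s.rjust(width, fill)

address = publish(left_pad)
same_fn = fetch(address)
assert same_fn("42", 5, "0") == "00042"
```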
Inspirations: Key projects that shaped this thinking:
- Unison (content-addressable code)
- Abstract Wikipedia (multilingual knowledge)
- Situated Software
Proof of life should be the next stop [for more people].
Note: An attacker only needs to be stronger than a single defense, which is not the same as being stronger than the sum of all defenses.
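A toy calculation makes the asymmetry concrete (the numbers and the independence assumption are purely illustrative):

```python
# n independent defenses, each breached with probability p = 0.2.
n, p = 5, 0.2

# Attacker who needs only ONE gap (defenses standing side by side):
p_one_gap = 1 - (1 - p) ** n    # ~0.672: a breach is likely

# Attacker who must pierce ALL layers (defense in depth):
p_all_layers = p ** n           # 0.00032: a breach is very unlikely

print(f"one gap suffices: {p_one_gap:.3f}; all layers: {p_all_layers:.5f}")
```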
Note 2: What we now see as a collective, i.e. a generated reality, existed before now and is documented; the following come to mind: 1984, Good Bye Lenin!, M. Night Shyamalan's The Village, and, IIRC, Avalon by Mamoru Oshii.
Using a dedicated on-disk file format, that is, a custom index, has many advantages and optimization opportunities. I thought using an existing OKVS would be an advantage from a time-to-market perspective, but it is not the case :)
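To give a feel for what a dedicated format buys, here is a toy append-only log with an in-memory offset index, in the style of Bitcask; `TinyLog` and its record layout are invented for this sketch and have nothing to do with ouverture's actual storage. Writes are sequential and a read costs one seek, the kind of decision a custom index lets you make and a generic OKVS does not:

```python
import os
import struct

class TinyLog:
    def __init__(self, path):
        self.f = open(path, "a+b")
        self.index = {}  # key -> (offset, length) of the value on disk
        self._rebuild()

    def _rebuild(self):
        """Scan the log once at startup to rebuild the offset index."""
        self.f.seek(0)
        while True:
            header = self.f.read(8)  # two big-endian u32s: key len, value len
            if len(header) < 8:
                break
            klen, vlen = struct.unpack(">II", header)
            key = self.f.read(klen)
            offset = self.f.tell()
            self.f.seek(vlen, os.SEEK_CUR)  # skip the value, remember where it is
            self.index[key] = (offset, vlen)

    def put(self, key: bytes, value: bytes):
        self.f.seek(0, os.SEEK_END)
        self.f.write(struct.pack(">II", len(key), len(value)) + key)
        offset = self.f.tell()
        self.f.write(value)
        self.index[key] = (offset, len(value))  # last write wins

    def get(self, key: bytes) -> bytes:
        offset, vlen = self.index[key]
        self.f.seek(offset)
        return self.f.read(vlen)

db = TinyLog("tiny.log")
db.put(b"hello", b"world")
assert db.get(b"hello") == b"world"
```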
I think most off-the-shelf stuff is a dead end. You see fairly frequent attempts using both KVSes and things like Elasticsearch. You can get it to work, but it just doesn't scale.
I am saddened that they picked up idioms from the database paradigm I invented, fixed the bugs I reported directly to ccorcos at his request about this very project, and did not even subtly mention my work.
Media elements, JavaScript, RAM limitations, difficult video rendering, and crashes limit the Pi's productivity.
Looking for ways to use a new CCD for Pi astrophotography recently, I made two mistakes: I didn't block ads and I opened more than two tabs.
The Pi ended up in swap hell for over a minute before I restarted. This is an RPi 4 with 4 GB, overclocked with heatsinks and active cooling, on an SD card with good IOPS. Browsing the web is the only thing I've found that it routinely struggles with.
Swapping on the SD card is a nightmare. I found that disabling swap actually leads to a better experience, even if it means that Firefox gets killed from time to time.
Would swap on zram work? I've used that to help with low-memory situations, but I'm not sure if the Pi's CPU is fast enough to avoid it being a different bottleneck.
Like others have said, JavaScript makes it slow. sourcehut.org: great. Twitter: unusable. Mastodon: barely. GitHub: OK-ish. Gmail: KO. Protonmail: KO. I use an RPi 400 with 4 GB of RAM. I forgot to mention that the built-in WiFi does not work great, so I am using Ethernet. I am using the RPi 400 to write this message, and I have been using it for two weeks. I tried Ubuntu 22.04 and 64-bit Raspberry Pi OS too. And I intend to keep using the RPi 400 and to build my projects on top of it.
Which is remarkable given how many older systems and software packages the Pi can run natively or emulate at full speed. The web really can be embarrassing in terms of performance.
It is awful, isn't it? I do wonder how many joules we waste on poorly performing web apps that could be made more efficient by applying less engineering rather than more.
As a JavaScript dev myself who writes JavaScript-heavy websites, I think all of my sites would perform just fine on a Raspberry Pi, at least on the newer models.
What I am most afraid of is the editing experience.
Then it is not English but a subset of English. There are already translation systems that work the way you describe, using restricted natural-language grammars.
To the contrary, it would be a superset. The point is that it is not restricted, but would also require additional distinctions that aren't even present in English.
Although, there may be some cases where truly identical meanings in English are collapsed into one, for example, the arguable lack of difference between "it isn't" and "it's not". (Quotations could be marked up as such so as not to disturb the text in its original language, but they could still be translated into other languages.)
It is well known that Wikidata does not scale, whether in terms of the number of data contributions or the number of queries. Not only that, but the current infrastructure is... not great. WBStack [0] tries to tackle that, but it is still much more difficult to enter the party than it could be. Changes API? None. That means it is not possible to keep track of changes in your own Wikidata/Wikibase instance enriched with some domain-specific knowledge. Change-request mechanic? Not even on the roadmap. Neither is it possible to query the history of changes over the triples.
The Wikidata GUI can be attractive and easy to use. Still, there is a big gap between the GUI and the actual RDF dump; that is, making sense of the RDF dump is a big endeavor. Who else wants to remember properties by number? It might be a problem of tooling. Question: how do you add a new type of object to the GUI? PHP? Sorry.
> Neither is it possible to query for history of changes over the triples.
And why should it be? The triples (and hence the full RDF dump as well) are a “lossy” translation of the actual information encoded in the graph; there are actually two different translations, the “truthy” triples, which throw away large parts of the data, and the full dump, which reifies the full statements but is therefore much more verbose. Revision history for the _actual_ items has been queryable via the MediaWiki API for a long time.
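For example, a few lines against that API return an item's revision history (a minimal sketch; the item Q42 and the User-Agent string are placeholders, not anything prescribed):

```python
import json
import urllib.parse
import urllib.request

API = "https://www.wikidata.org/w/api.php"
params = urllib.parse.urlencode({
    "action": "query",
    "prop": "revisions",
    "titles": "Q42",  # Douglas Adams, a classic test item
    "rvprop": "ids|timestamp|user|comment",
    "rvlimit": "5",
    "format": "json",
})
req = urllib.request.Request(
    f"{API}?{params}",
    headers={"User-Agent": "revision-demo/0.1"},  # Wikimedia asks clients to identify themselves
)
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# The result is keyed by page id; print the latest revisions.
page = next(iter(data["query"]["pages"].values()))
for rev in page["revisions"]:
    print(rev["timestamp"], rev["user"], rev.get("comment", ""))
```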
Or do you mean "How can a B2C or B2B product take advantage of the change-request mechanic?", in other words, "How can a change-request mechanic be implemented in another product?"
ouverture.py is my answer to [this Lobste.rs discussion on easing the production of micro-libraries](https://lobste.rs/s/ebbaed/how_ease_production_micro_librari...).