DeepSeek is a company whose funding comes from a hedge fund. If the hedge fund predicted the impact of all these releases correctly, it has likely made tons of money while at the same time advancing Chinese interests and prestige abroad.
It's interesting that health records were a prime topic at the White House event.
Are we going back to industrialist biopower (control of bodies) after neoliberal psychopower (control via induced mimetic desires)? Probably a fusion of the two.
It seems to me that big data has only been good enough to predict the past, but the real question for steering people is not "what did you want yesterday?" but "what will you want tomorrow?" (or, plainly said, why does the Netflix catalog suck and not adapt to my new interests?). The logical next step seems to be AI models that can incorporate psychological or neuro-physiological signals (hence the renewed interest in bodies and the explosion of wearable devices like smart rings) and build psychological twins of people, creating even more perfect resonance chambers to remote-control people's view of the world and their wants.
Man is the creature who does not know what to desire, and he turns to others in order to make up his mind. We desire what others desire because we imitate their desires.
The mimetic desire is triangular, based on the subject, model, and object. The subject mimics the model, and both desire the object. Subject and model thus form a rivalry which eventually leads to the scapegoat mechanism.
The scapegoat is chosen arbitrarily. All participants in the removal of the scapegoat must genuinely believe he is guilty. The resulting peace is born out of violence, and this form of violence controlling violence has existed since the beginning of civilizations.
We cannot truly escape this mimetic desire, and any attempt to do so simply lands us playing the game of mimesis on a different level.
In a world of higher and higher turmoil, it's no wonder that people in power are creating the scapegoats they need to divert violence and remain in power. It feels like the people in power want to replay the roaring '20s, or the Roman Empire.
Does it look like it will go toward panem et circenses (UBI and entertainment), toward the world wars of the past, or toward novel ways to mess up the world? ... I am not optimistic about this, but I hope our collective bets (like AI) turn out to be positive for the world and not just for the elites of a single country.
Yes, the self alone is shouting into the void, appalled and confused at what it hears back. Expand the boundary to encompass enough, lose momentum and be still, and healing happens.
Mindfulness is a tool and a habit that should be embodied in everything.
I recently wrote in a document (filled with too much ego, since it's called "starting points toward a cybernetic theory of karma"):
> I suspect that, subjectively, we each get the world we need - but you only see it when reaching a point of total stillness, so that your energy points outward from the ordinary. Then your world view shifts, and you make sense of how it works with the conceptual tools you have... Truth is a pathless land.
The main reason we can't do that now is that we require models to be digitally reproducible (IMHO, but also read Geoffrey Hinton on mortal computing).
The energy cost comes from error correction as much as from the training algorithms.
IMHO the problem (for us) with this approach lies in its logical consequences:
1) If large AI models become more powerful by avoiding language, embeddings of AI state become even more tied to the model they originate from than they are now.
Consequence: AI progress stalls, as companies using AI need to invest increasing amounts of money to reindex their growing corpora (see the sketch after this list).
This is already a problem; it becomes more of a lock-in mechanism.
If this is overcome...
2) Embeddings become a viral mechanism: it makes sense for a large company that commands a market to require its suppliers to use the same AI models, because state can then be transferred via embeddings rather than via external formats.
This allows them to cut out decision processes that would otherwise require expensive coordination mechanisms.
3) Eventually this potentially results in another exponential growth and lock-in mechanism, also at the expense of most tech people, as more and more is done outside our interface with AI (i.e. programming and software architecture improvements will themselves move below the language level, and we'll have to reverse engineer increasingly opaque improvements).
4) It ends with the impossibility of AI alignment.
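A minimal sketch of the reindexing cost behind point 1, using toy stand-in "models" and an in-memory index rather than any real embedding API or vector database: vectors produced by different models live in unrelated spaces (here even with different dimensions), so switching models means re-embedding the entire corpus.

```python
import hashlib
import numpy as np

def _seed(tag: str, text: str) -> int:
    return int(hashlib.sha256(f"{tag}:{text}".encode()).hexdigest(), 16) % 2**32

# Toy stand-ins for two embedding models; in reality these would be API calls
# or local models. The only point is that their vector spaces are unrelated.
def embed_model_a(text: str) -> np.ndarray:
    return np.random.default_rng(_seed("A", text)).standard_normal(768)

def embed_model_b(text: str) -> np.ndarray:
    return np.random.default_rng(_seed("B", text)).standard_normal(1024)

corpus = ["invoice 2024-03", "shipping delay report", "supplier contract"]

# Index built with model A: queries must also be embedded with model A.
index_a = {doc: embed_model_a(doc) for doc in corpus}

# Switching to model B: none of the stored vectors are reusable
# (different dimension, different geometry), so the whole corpus
# has to be re-embedded -- a cost that grows with the corpus.
index_b = {doc: embed_model_b(doc) for doc in corpus}
```

The index is only meaningful relative to the model that produced it, which is the lock-in that points 2 and 3 build on.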
I think this is worth reading. It raises a lot of interesting points and gives cultural references for further reading.
Is it relevant for this forum? I think so, because engineers don't fully understand how technology shapes how people think. The recent thread about the ethics crisis in tech https://news.ycombinator.com/item?id=42540862 comes to mind. We act as if it doesn't touch us, but our very passivity is the same as Mrs Pelicot's: when the powers we serve take away our own personal power (even the banal kinds we can see, e.g. installing our own software on our own computers), we whine meekly when it is already too late.
Slavoj Žižek touches on how the 92 episodes of rape committed by Mr Pelicot against the drugged Mrs Pelicot, who was unaware until the police showed her the photos, mirror how "digital powers treat us today".
And how Mr Pelicot is obviously the weaker one, a slave to his own compulsions, who can only act on an unaware object and cannot deal with a person with her own will. It's a tale we all live, in how tech has removed the Big Other from our lives (I have also been reading Byung-Chul Han's book "Non-Things", which contains similar ideas).
"we must not ignore or avoid digital media: their manipulative use is not inherently inscribed into their technology. They can be repurposed for emancipatory purposes [...] The system desperately seeks to control digital media because it recognizes their potential as tools for massive awakenings – including among women"
Finally, he writes about the film "Her Story": "Her Story (directed and written by Shao Yihui) exploded on Chinese screens in December 2024. [...] Her Story deserves to be celebrated as an exemplary case of feminine awakening that avoids the traps of politically correct moralizing stiffness. It hits both targets: the strong presence of male chauvinism in Chinese society and the ruling Communists’ solidarity with it (not to mention other critical stabs at state power)."
If "feminine awakening" wasn't happening how did we get MeToo? That was an example of Women learning how to use tech/social media to make their voice heard.
No Engineers were required.
These are Social problems not Engineering problems. And most Engineers would fail a high school Sociology class cause they just aren't exposed to that entire subject. For the education system to produce a Engineer+Sociologist time required would double or triple
Since Learning takes time, how society gets around it is via division of labor/specialization and multi-disciplinary teams.
So if you are in tech/engineering and want to contribute to social problems, get on multi-disciplinary teams which have experts in sociology/economic/political science/psychology/law etc cause they have already put in the time.
Dashboards, like any reductive representation of reality, are inherently limited. They highlight certain aspects while obscuring others, capturing specific data points while ignoring others. This is similar to how a map simplifies a territory or an image represents, but is not, the actual object (like in Magritte's famous painting "The Treachery of Images").
The problem is that dashboards are static snapshots of a dynamic reality. They are built on past understanding, and as systems evolve, they can become outdated and misleading. This can happen in several ways; two common ones are:
- Data drift: The underlying meaning of the data changes. For example, a bug might cause app crashes to be misreported, rendering crash-related metrics unreliable.
- Blind spots: Dashboards can't capture what they're not designed to measure. If user needs shift, a dashboard focused on existing feature usage won't reveal those changes.
This limitation isn't unique to dashboards. Any projection from a higher-dimensional space (reality) to a lower-dimensional representation (dashboard) will inevitably lose information. The problem is exacerbated when the representation doesn't adapt to the changing reality.
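A toy illustration of that last point (my own sketch, not from the original text): two very different underlying realities can project onto identical dashboard numbers, so the dashboard alone cannot tell them apart.

```python
# Per-user weekly session counts for a small product (the "reality").
week_a = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]    # healthy: everyone mildly active
week_b = [0, 0, 0, 0, 0, 0, 0, 0, 25, 25]  # unhealthy: most users churned, two power users remain

def dashboard(sessions):
    """The projection: per-user behaviour reduced to a single headline metric."""
    return {"total_sessions": sum(sessions)}

print(dashboard(week_a))  # {'total_sessions': 50}
print(dashboard(week_b))  # {'total_sessions': 50} -- identical, yet the product is in trouble
```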
I wonder if this is due to the nature of the language.
Lean 4 lets you redefine its syntax in ways that most other languages do not allow[1], so effectively you are dealing with a recursive language that could require a Turing-complete token representation system.
[1] https://leanprover-community.github.io/lean4-metaprogramming... What other language lets you redefine the meaning of digits? The mix of syntax + macros + elaboration makes it really flexible, but hard to treat reliably.
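For instance (a minimal sketch of my own, not taken from the linked guide): numeric literals in Lean 4 elaborate through the OfNat type class, so an ordinary user-level instance decides what a digit means at a given type.

```lean
-- A toy type for which the literal `2` is given a custom meaning.
inductive Two where
  | two
  deriving Repr

-- Literals go through `OfNat`, so this instance determines
-- what the token `2` denotes at type `Two`.
instance : OfNat Two 2 where
  ofNat := Two.two

#eval (2 : Two)  -- Two.two
```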
LLMs based on transformers are not Turing complete (nitpick: they are, but only if you use arbitrary-precision math, which is not the case in practical implementations https://arxiv.org/abs/1901.03429).
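To make the precision caveat concrete (my own toy example, not from the paper): at fixed precision even a simple running counter breaks down, and unbounded counters are exactly the kind of state the arbitrary-precision constructions rely on.

```python
import numpy as np

# In float32 (many deployed models use this precision or lower),
# incrementing past 2^24 is a no-op: the +1 is rounded away.
x = np.float32(2**24)             # 16777216.0
print(x + np.float32(1.0))        # 16777216.0 -- the increment is lost
print(x + np.float32(1.0) == x)   # True
```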