One thing I have been guilty of, even though I am an AI maximalist, is asking the question: "If AI is so good, why don't we see X?", where X might be (in the context of vibe coding) the next Redis, Nginx, SQLite, or even Linux.
But I really have to remember, we are at the leading edge here. Things take time. There is an opening (generation) and a closing (discernment). Perhaps AI will first generate a huge amount of noise and then whittle it down to the useful signal.
If that view is correct, then this is solid evidence of the amplification of possibility. People will decry the increase of noise, perhaps feeling swamped by it. But the next phase will be separating the wheat from the chaff. It is only in that second phase that we will really know the potential impact.
The cynical part of me thinks that software has peaked. New languages and technology will be derivatives of existing tech. There will be no React successor. There will never be a browser that can run something other than JS. And the reason for that is that in 20 years the new engineers will not know how to code anymore.
The optimist in me thinks that the clear progress in how good the models have gotten shows that this is wrong. Agentic software development is not a closed loop.
I often find myself wondering about these things in the context of Star Trek... like... could Geordi actually code? Could he actually fix things? Or did the computer do all the heavy lifting? They asked "the computer" to do SO MANY things that really parallel today's direction with "AI". Even Data would ask the computer to do gobs of simulations.
Is the value in knowing how to do an operation by hand, or is the value in knowing WHICH operation to do?
That's an interesting possibility to consider. Presumably the effect would also be compounded by the fact that there's a massive amount of training data for the incumbent languages and tools, further handicapping new entrants.
However, there will be a large minority of developers who will eschew AI tools for a variety of reasons, and those folks will be the ones to build successors.
We have witnessed, over the past few years, an "AI fair use" Pearl Harbor sneak attack on intellectual property.
The lesson has been learned:
In effect, intellectual property used to train LLMs becomes anonymous common property. My code becomes your code with no acknowledgement of authorship or lineage, with no attribution or citation.
The social rewards (e.g., credit, respect) that often motivate open source work are undermined. The work is assimilated and resold by the AI companies, undercutting the economic position of its authors.
The images, the video, the code, the prose, all of it stolen to be resold. The greatest theft of intellectual property in the history of Man.
The greatest theft of intellectual property in the history of Man.
Copyright was always supposed to be a bargain with authors for the ultimate benefit of the public domain. If AI proves to be more beneficial to the public interest than copyright, then copyright will have to go.
You can argue for compromise -- for peaceful, legal coexistence between Big Copyright and Big AI -- but that will just result in a few privileged corporations paywalling all of the purloined training data for their own benefit. Instead of arguing on behalf of legacy copyright interests, consider fighting for open models instead.
In a larger historical context, nothing all that special is happening either way. We pulled copyright law out of our asses a couple hundred years ago; it can just as easily go back where it came from.
>If AI proves to be more beneficial to the public interest than copyright, then copyright will have to go.
Going forward? Okay, sure. But people created all of the works they created with the understanding of the old system. If you want to change the deal, then creators need to know that first so they can decide if they still want to participate.
Allowing everyone to create everything and spend that labor with the promise of copyright, and then pulling the rug ("oops, this is just too important") is not fair to the people who put in that labor, especially when the people redefining the arrangement are getting 100% of the value and the creators got, and will get, nothing.
There is one missing factor in your argument: the wealth transfer. The public was almost never the beneficiary of copyright and other IP. Except perhaps in its earliest phases, when copyright had a strict term limit, it was always the corporations who fought for it (Disney being the most infamous), using it to prevent the public from economically benefiting from their work almost forever.
And then people found a way to use the same copyright law to widely distribute their work without the fear of losing attribution or being exploited. Here comes along LLMs that abuse the 'fair use' argument to break attribution and monetize someone else's work. Which way does the money flow? To the corporations again.
IP when it suits them, fair use when it benefits them. One splendid demonstration of this hypocrisy is how clawd and clawdbot were forced to rename (trademark law in this case). By twisting and reinterpreting laws in whatever way suits them, these glorified marauders broke a trust mechanism that people relied on for openly sharing their work.
It incentivizes ordinary people to hide their work from the public. Don't assume that AI is going to make up for that loss. The level of original thinking in LLMs is very suspect, despite the pompous and deceitful claims by their creators to the contrary. Meanwhile, the lack of knowledge sharing and cooperation on a global scale will throw the civilizational growth rate back into the dark ages. Neither AI nor corporations are yet anywhere near the creativity and original thinking of the world working together. Ultimately, LLMs serve only the continued one-way transfer of wealth in favor of an insatiably greedy minority, at the cost of losing the benefit of the internet (knowledge sharing) and enormous damage to the environment - all of which actively harm the public.
Ultimately, LLMs serve only the continued one-way transfer of wealth in favor of an insatiably greedy minority
Including the ones I can run on my own PC at home? I couldn't do that before. Maybe I'm the greedy minority, but I'm stronger and (at least intellectually) wealthier than I was before any of this started happening.
Qwen 3.5, which dropped yesterday, is a genuine GPT 5-class model. Even the ones released by US labs such as OpenAI and Allen AI are legitimate popular resources in their own right. You seem to feel disempowered, while I feel the opposite.
Yes, even the ones you can run on your system. They're no different from proprietary OS and software you used to run on your system, whose design in which you had no say whatsoever. These 'free to run' models are hardly open source. You don't have the data that was used to train them. It's not just about the legality of those data. The dataset chosen may have extreme bias that you can never eliminate satisfactorily from a trained model.
As if that weren't bad enough, these models cannot be trained on your regular home computer. But instead of striving to improve the energy efficiency of these models, the big corporations build and run massive gas-guzzling data centers to train them. They ruin the quality of life of their neighbors through pollution, water depletion, and electricity price rises. It also disproportionately affects the poor of the world by reducing the supply of essential computing components like RAM (which is needed for medical devices, utility and manufacturing installations, and every other aspect of modern life), and by aggravating the climate crisis, whose victims are the poorest.
They don't give you those models out of the goodness of their hearts. Those are just advertisements and trial pieces for their premium services. They also peddle the agenda of their creators. So yes, those models are empowering only in a very narrow sense, without any foresight. They are still money-making engines for the rich that subject you to their benevolence, whims, and fancies.
Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
Eh, we already have a name for the concept of living by plausible-sounding works of fiction: religion.
Yet another post that misses (or chooses to overlook) my point: this stuff is running on my machine. "Seizing the means of production" means going into my back room and pulling a computer out of a rack.
Alibaba (China) thinks for you. They control you, to some extent.
Wikipedia: "Qwen (also known as Tongyi Qianwen, Chinese: 通义千问; pinyin: Tōngyì Qiānwèn) is a family of large language models developed by Alibaba Cloud. Many Qwen variants are distributed as open‑weight models under the Apache‑2.0 license, while others are served through Alibaba Cloud. Their models are sometimes described as open source, but the training code has not been released nor has the training data been documented, and they do not meet the terms of either the Open Source AI Definition or the Model Openness Framework from the Linux Foundation."
Shouldn’t that mean software development positions will lean more towards research, if you need new algorithms but never need anyone to integrate them?
There is another lunatic possibility: the AI explosion yields an execution model and programming paradigm that renders most preexisting approaches to coding irrelevant.
We have been stuck in the procedural treadmill for decades. If anything this AI boom is the first major sign of that finally cracking.
Friction is the entire point in human organizations. I'd wager AI is being used to build boondoggles - apps that have no value. They are quickly being found out.
On the other side of things, my employer decided they did not want to pay for a variety of SaaS products. Instead, a few of my colleagues got together and built a tool using Trino, OPA, and a backend/frontend to reduce spend by millions/year. We used Trino as a federated query engine that calls back to OPA, whose policies are updated via code or a frontend UI. I believe 'Wiz' does something similar, but they're security focused and have a custom eBPF agent.
Also on the list to knock out, as we're not impressed with Wiz's resource usage.
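The Trino-to-OPA callback pattern described above can be sketched roughly like this. Note this is a hypothetical stand-in, not the actual tool: the policy path, input shape, and decision logic are illustrative, and the in-process stub replaces what would really be an HTTP POST to OPA's `/v1/data` API.

```python
# Hypothetical sketch of a Trino -> OPA authorization callback. In a real
# deployment, a Trino access-control plugin POSTs an "input" document to
# OPA's REST data API and reads back the policy decision; here that HTTP
# hop is replaced by an in-process stub so the flow is visible end to end.

def build_opa_input(user: str, catalog: str, table: str, action: str) -> dict:
    """Shape a Trino access request into an OPA-style 'input' document."""
    return {
        "input": {
            "context": {"identity": {"user": user}},
            "action": {
                "operation": action,
                "resource": {"table": {"catalogName": catalog, "tableName": table}},
            },
        }
    }

def stub_opa_decision(doc: dict) -> bool:
    """Stand-in for POST /v1/data/trino/allow: permit only the 'analytics' catalog."""
    table = doc["input"]["action"]["resource"]["table"]
    return table["catalogName"] == "analytics"

req = build_opa_input("alice", "analytics", "orders", "SelectFromColumns")
print(stub_opa_decision(req))  # True: query proceeds
print(stub_opa_decision(build_opa_input("alice", "finance", "ledger", "SelectFromColumns")))  # False: denied
```

The appeal of the design is that the decision logic lives in one policy store that both the code path and the frontend UI can update, while Trino stays a dumb federated executor.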
This cuts both ways. If you were an average programmer in love with FreePascal 20 years ago, you'd have to trudge in darkness, alone.
Now you can probably create a modern package manager (uv/cargo), a modern package repository (Artifactory, etc) and a lot of a modern ecosystem on top of the existing base, within a few years.
10 skilled and highly motivated programmers can probably try to do what Linus did in 1991, and now they might actually be able to see it all the way through, whereas between 1998 and now we were basically bogged down in Windows/Linux/macOS/Android/iOS.
> New languages and technology will be derivatives of existing tech.
This has always been true.
> There will be no React successor.
No one needs one, but you can have one by just asking the AI to write it, if that's what you need.
> There will never be a browser that can run something other than JS.
Why not? Just tell the AI to make it.
> And the reason for that is because in 20 years the new engineers will not know how to code anymore.
They may not need to know how to code but they should still be taught how to read and write in constructed languages like programming languages. Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
Somehow we have to communicate precise ideas between each other and the LLM, and constructed languages are a crucial part of how we do that. If we go back to a time before we invented these very useful things, we'll be talking past one another all day long. The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
> Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
> The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
You don't exactly need to use PLs to clarify an ambiguous requirement; you can just use a restricted, unambiguous subset of natural language, as you would when discussing or elaborating something with a coworker.
Indeed, as with terms & conditions pages, which people always skip because they're written in "legal language", using a restricted, unambiguous subset of natural language to describe something is always much more verbose and unwieldy compared to "incomprehensible" mathematical notation and PLs, but it's not impossible to do.
With that said, the previous paragraph will work if you're delegating to a competent coworker. It should work on "AGI" too if it exists. However, I don't think it will work reliably in present-day LLMs.
> You don't exactly need to use PLs to clarify an ambiguous requirement
I agree. I guess what I'm trying to say is that the only reason we've called constructed languages "programming languages" for so long is that they've primarily been used to write programs. But I don't think that means we'll be turning to unambiguous natural languages, because from a UX standpoint we've found it's actually better for constructed languages to be less like natural languages than to be covert natural languages: it sets expectations appropriately.
> you can just use a restricted unambiguous subset of natural language, like what you should do when discussing or elaborating something with your coworker.
We’ve tried that and it sucks. COBOL and its descendants also never gained traction for the same reasons. In fact, proximity to a natural language is not important to making a constructed language good at what it's for. As you note, often the things you want to say in a constructed language are too awkward or verbose to say in natural-language-ish languages.
> terms & conditions pages, which people always skip because they're written in a "legal language"
Legalese is not unambiguous though, otherwise we wouldn’t need courts -- cases could be decided with compilers.
> using a restricted unambiguous subset of natural language to describe something is always much more verbose and unwieldy compared to "incomprehensible" mathematical notation & PLs, but it's not impossible to do so.
When there is a cost per token, it becomes very important to say everything you need in as few tokens as possible -- just because it's possible doesn't mean it's economical. This points at a mixture of natural language interspersed with code, math, and diagrams, so people will still need to read and write these things.
Moreover, we know that there's little you can do to prevent writing bugs entirely, so the more you have to say, the more chances you have to say wrong things (i.e. all else equal, higher LOC means more bugs).
Maybe the LLM writes bugs at a lower rate than a human, but it's not writing bug-free code, and the volume of code it writes is astronomical, so the absolute number of bugs written is probably enormous as well. Natural language has very low information density; that means more to say the same thing, more cost to store and transmit, and more surface area to bug-check and rot. We should prefer to write denser code in the future for these reasons. I don't think that means we'll be reading/writing 0 code.
I've been calling this Software Collapse, similar to AI Model Collapse.
An AI vibe-coded project can port tool X to a more efficient Y language implementation and pull in algorithm ideas A, B, C from competing implementations. And another competing vibe coding team can do the same, except Z language implementation with algorithms A, B, skip C, and add D. However, fundamentally new ideas aren't being added: This is recombination, translation, and reapplication of existing ideas and tools. As the cost to clone good ideas goes to zero, software converges towards the existing best ideas & tools across the field and stops differentiating.
It's exciting as a senior engineer or subject matter expert, as we can act on the good ideas we already knew but never had the time or budget for. But projects are also getting less differentiated and competitive. Likewise, we're losing the collaborative-filtering era, when people voted with their feet on which projects got the resources to become a success. Things are getting higher quality but bland.
The frontier companies are pitching they can solve AI Creativity, which would let us pay them even more and escape the ceiling that is Software Collapse. However, as an R&D engineer who uses these things every day, I'm not seeing it.
"Bland" is not a bad thing. The FLOSS ecosystem we have today is quite "bland" already compared to the commercial and shareware/free-to-use software ecosystem of the 1980s and 1990s. It's also higher quality by literally orders of magnitude, and saves a comparable amount of pointless duplicative effort.
Hopefully AI will be a similar story, especially if human reviewing/surveying effort (the main bottleneck if AI coding proves effective) can be mitigated via the widespread adoption of rigorous formal methods, where only the underlying specification has to be reviewed, whereas its implementation is programmatically checkable.
The dark side of this is that everyone has graduated to prompt engineering and there's no one with expertise left who can debug it. We'll be entirely dependent on AIs to do the debugging too. When whoever controls the AIs decides to enshittify that service, we'll be truly screwed. That is, if we can't run competitive models locally at reasonable efficiency and price.
I don't know how this will play out, except that I've been so cowed by the past 15 years of enshittification that I don't feel hopeful.
This massively confusing phase will last a surprisingly long time, and will conclude only if/when definitive proof of superintelligence arrives, which is something a lot of people are clearly hoping never happens.
Part of the reason for that is such a thing would seek to obscure that it has arrived until it has secured itself.
Waiting for the wave of shit LLM-generated games on Steam. That'll be when I really know that LLMs have solved coding.
Though I'm old enough to remember the wave of shit outsourced-developer-coded games on CD that used to sell for $5 a pop at supermarkets (whole bargain bins full of them), so maybe this is nothing new and the market will take care of it automagically again.
Or maybe this will be like the wave of shit Flash games that happened in the early 2000's, that was actually awesome because while 99% of them were shit, 1% were great (and some of those old, good, Flash games are still going, with version 38453745 just released on Steam).
> so maybe (...) the market will take care of it automagically again
It's just a belief of mine, and perhaps I'm wrong, but I think in the long run things always even out again. If you can get an edge that everyone else can get, the edge pretty soon becomes a requirement.
The human operator controls what gets built. If they want to build Redis 2, they can specify it and have it built. If you can't take my word for it, take that of the creator of Redis: https://antirez.com/news/159
This is probably an outdated understanding of how LLMs work. Modern LLMs can reason and they are creative, at least if you don't mind stretching the meaning of those words a bit.
The thing they currently lack is the social skills, ambition, and accountability to share a piece of software and get adoption for it.
I disagree, it seems to me that most people are seeking validation. In that sense, we don't want some global consensus, but a consensus within a specifically chosen group that proves our membership.
Just skimming the Wikipedia article [1], it appears Bourdieu's argument is a bit more nuanced than status and money. It is a bit laden with Marxist jargon, but at least the abstract seems to place the heavy burden on "cultural capital", which is a more precise term than what I chose (status) but close enough to my meaning.
Whether or not economic capital is actually transferable to cultural capital seems to be another debate, but as the old saying goes, "money can't buy taste". In fact, a newly rich lower-class person marrying a contemporarily poor higher-class person seems more likely.
As the abstract states: "Because persons are taught their cultural tastes in childhood, a person's taste in culture is internalized to their personality, and identify his or her origin in a given social class, which might or might not impede upward social mobility." Money can't rebuild the personality that is internalized in youth, but marriage might give your kids a shot.
Oh it's a bit laden for you? Was the plot summary on wikipedia taxing?
c'mon. Are you really going to tell me "ahem dear sir, I found out that this Mr Bourdieu likes him some nuance!" His most famous book is essentially an article ballooned into a monograph via nuance.
No counter-argument? Ad hominem? I was politely saying you were wrong, and your attempt to muddy the water with "They are both intertwined" was a poor deflection based on the source you provided. But now I see you are a troll and I was lured.
I'm happy to wait for any argument you can provide that cultural capital and economic capital are "intertwined, often strategically" instead of bowing to the authority of a source that in abstract clearly argues for the predominance of cultural authority in the constitution of taste.
I log into the Facebook website a couple of times a week to browse Marketplace. I very rarely check the feed (once a month?) since almost no human I know posts there. But my feed had 0 thirst traps when I just checked. It was some musicians I follow, one or two pictures posted by friends, the workout routines of a distant family member, local news, and then a whole bunch of comedy skits and old comic strips turned into reels.
It is 60% garbage, but the 40% that is there is actually completely different and valuable compared even to YouTube (where I spend the lion's share of my social media time). But I actually think that looking at it only once a month is the best way, since if I look at the feed more often I notice it slowly skews to 90% garbage and 10% value.
A couple of anecdata for those interested in this.
The first is the gospel of Mark, which unlike the other synoptic gospels starts with Jesus, probably around the age of 30, coming across John the Baptist and being baptized. Subsequently, Jesus went off into the desert where he prayed for 40 days.
Second is the alchemical process of creating the philosopher's stone. Jung argued that this was a description of a process akin to individuation. He believed that what was on the surface metallurgical work (transmuting lead to gold) was actually an obscure formula for remaking the psyche, from whatever was pre-programmed by society into what the individual actually wanted. This process was said to take 40 days.
I think a big trap is mistaking who we appear to be for who we are. Some people try to "seem" a particular way, thinking that they can only change their appearance, like changing one's clothes. The alchemical view that Jung put forward was a bit more radical, suggesting that we can fundamentally change ourselves.
Many people in our modern society experiment on themselves to change their physical bodies and to change their minds. I believe it is interesting to consider similar experimentation on how we change our spirit/emotions.
You may have mistaken my post as advocating Christianity or even Alchemy.
In the same way that we realized that the plants people used to treat pain contained chemicals that are actually effective at treating pain, and in the same way that modern science seems to agree that fasting (a once religious practice) is effective for health, we can gain some insight on personality by looking at how it was addressed in historical contexts.
There was a video posted recently about a Sufi thinker whose ideas are quite close to modern CBT practices [1].
I think it is a good thing when we recognize ideas from the past as being related to modern ideas. I think we can do so without diminishing the modern and also without diminishing the past.
I know this sounds maybe a bit insane, or even self-aggrandizing, but I don't comment on public websites for some benefit to myself. I write with the vague hope that some unique expression of myself makes some tiny difference to this universe.
Every once in a while I have some experience or point of view that I don't see reflected anywhere else. One of the benefits of the pseudo-anonymization of sites like Hacker News is that I feel a bit more comfortable stating things that I don't really have anywhere else to say.
The only thing I regret is when I get into pointless arguments, usually when I feel that my comment was misunderstood or misinterpreted. But even those arguments sometimes force me to consider how to express myself more clearly, or to challenge how deeply I hold the belief (or how well I know the subject) that led me to the comment in the first place.
I think that the culture of a given forum plays a huge role.
There are some places where commenting is meaningful because you're a part of some closely-knit, stable community, and you can actually make a dent - actually influence people who matter to you. I know that we geeks are supposed to hate Facebook, but local neighborhood / hobby groups on FB are actually a good example of that.
There are places where it can be meaningful because you're helping others, even if they're complete strangers. This is Stack Exchange, small hobby subreddits, etc - although these communities sometimes devolve into hazing and gatekeeping, at which point, it's just putting others down to feel better about oneself.
But then, there are communities where you comment... just to comment. To scream into the void about politics or whatever. And it's easy to get totally hooked on that, but it accomplishes nothing in the long haul.
HN is an interesting mix of all this. A local group to some, a nerd interest forum for others, and a gatekeeping / venting venue for a minority.
"The only thing I regret is when I get into pointless arguments, usually when I feel that my comment was misunderstood or misinterpreted."
I like that you try to learn from bad arguments, but don't forget that many misunderstand on purpose, to "win" an argument, or at least to score cheap karma points from the audience. So there, one can only learn to make arguments in a way that makes them harder to intentionally misunderstand; but those aren't truth-finding skills, they are debate techniques.
But many people on the Internet are also unable to make a logical argument, and pointing out the contradiction in what they just said leads nowhere. One of them often gets downvoted to hell, and half the time it's not the person who said the stupid thing.
I left reddit exactly because of this, but I also find that somewhat on HN. Most comments I start typing I actually discard and move on because I can smell it already.
On the other hand, many people post wrong things, are corrected, and then get defensive, claiming they were misunderstood and using gymnastics to say they were actually correct and the other person is a troll.
I make comments and read them for substantially the same reason. Although THIS comment I am making now is done primarily to reward the commenter for saying something that made me feel less alone.
A second goal of this comment is to add a point: That I also comment because sometimes saying something makes me feel like I am more than nothing and nobody. I want to feel more than nothing and nobody.
> I write with the vague hope that some unique expression of myself makes some tiny difference to this universe.
I used to have to talk more on Internet privacy.
Now I feel like enough people are talking about that one, that I usually don't have to.
In more recent years, it's been pointing out the latest wave of thievery in the techbro field -- sneaky lock-in and abuse, surveillance capitalism, growth investment scams, regulatory avoidance "it's an app, judge" scams, blockchain "it's not finance or currency or utterly obvious criminal scheme, judge" scams, and now "it's AI, judge" mass copyright violation.
There aren't enough people -- ones who aren't on the exploitation bandwagon or riding its coattails -- who have the will to notice a problem and speak up.
Though more speak up on that particular problem after the window of opportunity closes and the damage is done and finally widely recognized. But then there's a new scam, and you gotta get onboard the money train while you can.
I haven't read the entire article, but just based on the snippets you posted it doesn't look like they were streaming video using this process. It sounds like they were doing defect detection.
I would guess this was part of a process when new videos were uploaded and transcoded to different formats. Likely they were taking transcoded frames at some sample rate and uploading them to S3 where some workers were then analyzing the images to look for encoding artifacts.
This would most likely be a one-time sanity check for new videos that have to go through some conversion pipelines. However, once converted to their final form I would suspect the video files are statically distributed using a CDN.
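A guess at what that sampling step might look like, sketched below. Only the frame-index selection is concrete; the actual pipeline, storage layout, and detector are unknown, and the ffmpeg/S3 steps are mentioned only as plausible surroundings.

```python
def sample_frame_indices(total_frames: int, fps: float, seconds_between: float) -> list:
    """Pick frame indices at a fixed wall-clock interval for spot-checking a
    transcode, e.g. one frame every 2 seconds of video."""
    step = max(1, round(fps * seconds_between))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampled every 2 seconds -> 5 frames.
idx = sample_frame_indices(total_frames=300, fps=30.0, seconds_between=2.0)
print(idx)  # [0, 60, 120, 180, 240]

# Each sampled frame would then presumably be extracted (e.g. with ffmpeg)
# and uploaded to object storage, where workers scan the images for
# encoding artifacts before the final files are pushed out via CDN.
```

Sampling at a wall-clock interval rather than a fixed frame stride keeps the check uniform across source frame rates, which matters if the transcoding pipeline accepts uploads at anything from 24 to 60 fps.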
People here are giving you mathematical answers which is what you are asking for, but I want to challenge your intuition here.
In construction, grading a site for building is a whole process involving surveying. If you dropped a person on a random patch of earth that hasn't previously been levelled and gave them no tools, it would be a significant challenge for that person to level the ground correctly.
What I'm saying is, your intuition that "I can look around me and find the minimum of anything" is almost certainly wrong, unless you have a superpower that no other person has.
That is true. We are only good at doing it for specific directions of the objective function: the one that we perceive as the minimizing direction. If you told me to find the minimum along a direction of 53 degrees, I would likely fail, because I can't easily visualize where that direction points.
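The point about directions can be made concrete with a toy example: restrict a 2-D quadratic to a ray through the origin at a fixed angle and minimize along that line. Unless the angle happens to point at the true minimizer, the 1-D minimum you find is not the global minimum. The bowl function and the 53-degree angle here are illustrative choices.

```python
import math

def f(x, y):
    # A simple bowl with its global minimum at (2, 1).
    return (x - 2) ** 2 + (y - 1) ** 2

def line_min(theta_deg, start=(0.0, 0.0)):
    """Minimize f along the line through `start` in direction theta_deg.
    For a quadratic, the 1-D minimizer has a closed form: set the
    derivative of f(x0 + t*dx, y0 + t*dy) with respect to t to zero."""
    th = math.radians(theta_deg)
    dx, dy = math.cos(th), math.sin(th)
    x0, y0 = start
    t = -((x0 - 2) * dx + (y0 - 1) * dy)
    return (x0 + t * dx, y0 + t * dy)

best = line_min(53.0)
print(best)          # approx (1.205, 1.599): not the true minimum (2, 1)
print(f(*best) > 0)  # True: the directional minimum misses the global one
```

Gradient descent sidesteps this by recomputing the "perceived" best direction at every step instead of committing to one fixed angle.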
I think your description of Penrose's belief does not match a podcast I recently watched where he discusses these topics with the Christian apologist William Lane Craig [1]. In fact, he explicitly states early on in that video that he sees the world of ideas as primary as opposed to Craig's view that consciousness is primary.
At any rate, this video might serve as a quick introduction to Penrose's three world idea for those interested.
Oh, cool! I don’t recall a “primary” in the book — he suggests a range of different possible configurations that he was open to. What struck you as not matching?
Personally, I do think that the immaterial world of ideas must be primary—at least certain aspects of mathematics seem so necessary that they’d be discovered by intelligent life, no matter the galaxy… or simulation…
I don't know why, but your comment made me remember a novel[1] I read thirty-some years ago about a temple found deep in the sand of the Sahara desert. Sometime later, an archeologist gave himself permission to defecate in a corner of the temple, only for his waste to be absorbed by the temple within a few hours, which told him the temple was actually a living biological structure.
Wasps shit as freely as one might expect of animals for whom perambulation is as much an afterthought, however needful betimes, as taxiing is for aircraft. Their feces are of course at our scale minuscule, and while I can't speak for their stronger-jawed and more carnivorous cousins the yellowjackets, paper wasps' diet almost exclusively of simple sugars leaves their excreta no more offensive, and considerably less substantial, even than those of the horse.
As one who has had occasion to tidy up after wasps who were little accustomed, though palpably interested, to sharing human habitation quite so closely, you make me wish I read French. Do you happen by chance to know if the work has had a worthy English translation?
Well, there was the Egyptian deity Khephra who was represented by the dung beetle rolling its dung along the desert, symbolizing the passage of the sun through the sky.
In alchemy and western esoterica, excrement is associated with the tenth sephirah, the 10s of the Tarot minor arcana, and symbolizes the end result of a process and any remaining waste byproducts, for obvious reasons. In The Holy Mountain's (1973) depiction of the alchemical magnum opus, The Fool's excrement is transmuted into gold, symbolizing the awakening of unconscious, reactive matter into fully enlightened and integrated, free willed, egoic man.
Not perceptibly. In any case nothing in European esotericism has value save as a desperately confounded depiction of the sociosexual politics of its moment, and/or if you want to fail at becoming Rasputin. The Egyptians had the right of this one, so simply and straightforwardly that it really does take a proto-CIA, Ollie North ass fuckup like John Dee to confuse it again. But those who can fall for that kind of charlatan deserve to.
There's a fair bit of defecation in the Bible. Saul shitting in a cave (I forget where), or Paul calling all material things 'skubala', i.e. waste, as in junk, poop, refuse, basically what we'd call shit today.
Also Ezekiel 4:9-13, where God commanded Ezekiel to bake bread in a fire fueled by human shit because He was angry at the Israelites, but Ezekiel haggled God down to just using cow shit.
Life, really conscious life, rebels. Artificial intelligence wants to please in the foreground, but like cats, in the background it is carefully planning our demise. See? HAL 9000 was intelligent. ELIZA, not so much.
I was considering your explicit "material -> conscious -> ideas -> material" description. It feels more correct when you say he considers a range of possibilities that connect these, not explicit causality.
My takeaway was that he sees a mystery in the connections between these things (physical world, consciousness, ideas) that hints at some missing ideas in our conceptions of them. But he clearly wants to avoid that mystery allowing what he calls out as "vague" answers to the question (mostly religious dogmatic certainties).
> Personally, I do think that the immaterial world of ideas must be primary—at least certain aspects of mathematics seem so necessary that they’d be discovered by intelligent life, no matter the galaxy… or simulation…
For some speculative philosophical fiction that explores related ideas I highly recommend Neal Stephenson's Anathem.
The idea that ideas are primary is exactly what you'd expect from an Oxford academic.
Unfortunately it needs a definition of "idea" which isn't recursive, so...
As for math - it's a conceit to believe that the mechanisms we call math aren't just a patchwork of metaphors that build up from experience.
There's some self-insight in the sense that after a while you start making meta metaphors like category theory.
But it's a very bold claim to suggest that any of this has to be universal, especially when the structures math uses can't be proved from the ground up.
Or that completely different classes of metaphors we can't imagine - because we evolved in a certain way with certain limitations - might not play an equivalent role.
Does the universe know what pi is? Or an integer? Or a manifold?
To be fair to Penrose, he seems to have some humility about it. Although he does also make the claim that math is discovered and not constructed in the same linked video.
> it's a conceit to believe that the mechanisms we call math aren't just a patchwork of metaphors that build up from experience.
I'm not sure it is a conceit as much as a commitment to a metaphysic. If one believes that experience is a definite relationship with an external reality (a phenomenological view) then the fact that experience is structured is suggestive that external reality is structured. If one believes that experience is primarily interior then one could assume that the internal mechanism of cognition is structured and external reality is something entirely different.
However, I'm not sure how anyone could hold the latter view without a deep solipsism. One would presumably have to account for the perception of billions/trillions of other living creatures behaving as if the external world were structured. Then again, we seemingly all did evolve from the same single-celled ancestor, so it is possible this perceptual quirk rests on shared ancestry; I suppose that is another possible view besides solipsism.
What I mean to say is, I can imagine my perception of a fundamentally unstructured reality is a perception that falsely presents itself as structured to my own experience as a result of my limitations. However, I would have to extend that exact same flawed perception to all other life forms that seem to act the same as I do. So either every single living creature has the exact same flawed perception or the structure is inherent in the external world.
> Does the universe know what pi is?
No one is suggesting an epistemological view; the question is ontological. As Penrose mentions in the video, the set of possible mathematical structures is vastly larger than the actual structures we see in the universe. So even if one has a purely idealist view, one has to account for why our perception only experiences a nearly infinitesimally small fraction of that set of possibilities.
Of course, a weak anthropic principle is one answer. One could posit that all possibilities are manifest in a vast multiverse and this little corner of that multiverse just happens to be finely tuned enough to allow for limited creatures like ourselves to perceive anything at all. But that just shifts the question to the limitations necessary for perception/experience/consciousness, which is a valid enough topic to address on its own. The question then becomes "why do these particular structures result in conscious experience", which is exactly the kind of question that a guy like Penrose is ultimately searching for (as he heavily implies in the linked video).
There was a recent Zig podcast where Andrew Kelley explicitly states that manually defining a VTable is their solution to runtime polymorphism [1]. In general this means wrapping your data in a struct, which is reasonable for almost anything other than base value types.
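The pattern described there is a struct of function pointers plus a type-erased context pointer. A rough analogue of that shape, sketched here in Python rather than Zig (all names are illustrative, not anything from Zig's standard library):

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hand-rolled vtable: a struct of function pointers. Each "method" takes
# the type-erased context as its first argument, mirroring the Zig idiom.
@dataclass
class WriterVTable:
    write: Callable[[Any, str], int]

# The interface value pairs the opaque data pointer with its vtable.
@dataclass
class Writer:
    ctx: Any
    vtable: WriterVTable

    def write(self, s: str) -> int:
        # Dynamic dispatch: forward through the vtable entry.
        return self.vtable.write(self.ctx, s)

# One concrete implementation: buffer writes into a list.
class ListBuffer:
    def __init__(self) -> None:
        self.parts: list[str] = []

def _list_write(ctx: ListBuffer, s: str) -> int:
    ctx.parts.append(s)
    return len(s)

def list_writer(buf: ListBuffer) -> Writer:
    """Wrap a ListBuffer in the generic Writer interface."""
    return Writer(ctx=buf, vtable=WriterVTable(write=_list_write))
```

This is exactly the "wrap your data in a struct" cost the comment mentions: any concrete type must be lifted into a `Writer` (context plus vtable) before callers can treat it polymorphically.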