Self-driving as a case study for AGI (karpathy.github.io)
144 points by askytb on Jan 22, 2024 | 193 comments



Seeing the guy who couldn't deliver FSD decide to make analogies from FSD to AGI does actually give me confidence we are decades away.

Yes, yes, I know there's like 1-2 companies that have highly modified vehicles that are pretty good, in a limited geofenced area, in good weather, at low speed, driving conservatively, local roads, most of the time. This is not "FSD".

They've been making very impressive incremental improvements every few years for sure. I had a Tesla for nearly 5 years and it was "wow" at first, and then "heh, I guess it's a little better" every year after that.

But when can I get in a taxi at JFK or on 5th Ave and get robotaxied through city streets, urban highway, off into the far suburbs? Could be a decade, if it happens. Just because we were able to make horses faster doesn't mean we flew horses to the moon.

Apply the same "sorta kinda almost" definition to AGI and yeah sure, maybe in 10 years. Really really actually solved? Hah.


AGI has become a philosophical term in the way you are using it. Which is fine for discussing philosophy, but to the point of the article, AI-enabled automation is beginning to have a significant impact on the economy due to the new functionality.


Where have LLM AIs made measurable economic impacts that weren't already using some form of AI to start with (translation, legal, content farms, marketing, robotics)?


Subreddits for copywriting and graphic design and digital art are full of people talking about how the amount of gig work available has dropped off.


Software developers on HN are saying the same thing. I don't see much evidence that AI has changed the game such that it would cause the work to dry up, though. In fact, I am surprised it hasn't created more work exploring the possibilities of new AI APIs.

But when you can get 5% returns just by sticking your money in the bank, why would you bother investing in software, where 99¢ is considered an exorbitant cost by its customers? That is no doubt why all of these creative industries are on the decline. There is, generally speaking, no money to be made.


I think software devs on here are conflating AI with a hidden recession and other economic factors causing a slump in tech. I still can’t find a single thing that AI could reasonably replace about my job even if I was a junior developer. As a senior engineer using LLM tooling, it offers some benefits, but it’s still nowhere near “job stealing” capabilities.


There's a new fallacy, where AI is dismissed if it "can't replace" a human completely. AI is more like an augmentation tool, that allows fewer people to create work faster by effectively outsourcing the grunt work.

Better tools like Copilot and ChatGPT reduce the amount of time it takes to deliver a feature, reducing the need to horizontally scale developers.

Why hire 10 when 3 could do the same work with AI tooling?

I think while tech is obviously contracting post-COVID from overhiring, it's also true that you just don't need as many people anymore.


> Why hire 10 when 3 could do the same work with AI tooling?

Why hire 3 when you can hire 10 to do the work of 33? It was only a couple of years ago when businesses were boasting "We can't possibly hire enough."

But, of course, there isn't enough money to be made anymore, which is the real reason creative industries are on the decline.


Fair examples!

Also, not to move the goal posts, but there has been a large economic shift since Q3 2022 with tech & finance belt tightening, inflation normalizing and generally slowing. So these things don't happen in a vacuum (which makes them hard to measure!).

For example my previous & current employer have done their first layoffs since pre-pandemic, and neither has anything to do with AI. They just overexpanded and need to shrink.


I think for us to say it's "beginning to have a significant impact on the economy", there needs to be an actual measurable increase in productivity due to AI-enabled automation, which so far has been elusive. It might be impacting employment in some specific jobs, but the mix of jobs in the economy is always constantly changing.


Just because something is difficult to measure does not mean it doesn't exist. People who have automated their jobs are unlikely to report that, since they like having a paycheck. But that doesn't mean it isn't happening.


It's not difficult to measure though - we've been tracking total factor productivity of our economy for decades. Steam power, electrification, computers - all of these things had huge, measurable impacts on the amount of economic output vs input. No self-reporting (of what?) necessary. If AI means that companies are getting the same or more output from less input of labor and capital, productivity should be soaring right now.
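To make the measurement concrete, here is a rough growth-accounting sketch (the standard Solow-residual arithmetic; the 0.3 capital share and the yearly numbers are purely illustrative, not from any real data):

    alpha = 0.3  # assumed capital share of income (illustrative)

    def tfp_growth(output_growth, capital_growth, labor_growth):
        # Solow residual: the part of output growth not explained by input growth
        return output_growth - alpha * capital_growth - (1 - alpha) * labor_growth

    # hypothetical year: output up 3%, capital stock up 2%, hours worked down 1%
    print(tfp_growth(0.03, 0.02, -0.01))  # ~0.031, i.e. TFP up about 3.1%

If AI really were letting companies produce the same output with less labor, that residual is where it would show up.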


> Steam power, electrification, computers - all of these things had huge, measurable impacts on the amount of economic output vs input.

Well... maybe not computers, which seems relevant: https://en.wikipedia.org/wiki/Productivity_paradox#IT_unprod...


Ha, I was thinking of that when I typed it. I agree though - I think many of the AGI boosters on HN would be very surprised by how little economic productivity increased thanks to computers and the Internet.


If we measure economic output in dollar terms, heightened competition could result in better quantity, quality, and variety of products, such as shows or software available to an average global consumer, while raising neither their expenses nor aggregate income for the producing companies.

Thus, in some market segments, it's possible for real productivity to increase without having a significant impact on economically measured output.


@gitfan86 - You would measure that the same way productivity is always measured - The company's overall economic output would be unchanged, and their labor inputs would have decreased, so the total factor productivity would have increased by virtue of automating the DEI group (just the same as when companies used 'mail-merge' to automate large groups of people doing that work manually, for instance).


Electrical output, number of units shipped can be measured.

How do you measure the output of a DEI department? Now assume those people automated their jobs with ChatGPT. How would you measure the change in productivity?


>But when can I get in a taxi at JFK or on 5th Ave and get robotaxied through city streets, urban highway, off into the far suburbs? Could be a decade, if it happens. Just because we were able to make horses faster doesn't mean we flew horses to the moon.

Having ridden in a lot of Waymos, which can handle SF (urban stuff) and the Phoenix area (highways and suburban stuff) perfectly well, I feel quite confident that that could happen right now.


> Seeing the guy who couldn't deliver FSD decide to make analogies

Reductive and rude phrasing.


Tens of thousands, possibly 100K, people were sold $5-15k worth of software that doesn't work and won't be refunded. I think that's pretty rude.


I think what Andrej is describing is more "automation" than AGI. His discussion of self-driving is more analogous to robots building cars in a Tesla factory displacing workers than anything AGI. We've already had "self driving" trains where we got rid of the human train driver. Nothing "AGI" about that. The evolution of getting cars to self-drive is not necessarily making the entity controlling the car more human-like intelligent. It's more like meeting in the middle between the human driver and the factory robot +/- some technology.

So how to define AGI? I'm not sure economic value factors here. I would lean towards a definition around problem solving. When computers can solve general problems as well as humans, that's AGI. You want to find a drug for cancer, or drive a car, or prove a math theorem, or write a computer program to accomplish something, or whatever problems humans solve all the time. (EDIT: or reason about what problems need to be solved as part of addressing other problems.) There's already classes of problems, like chess, where computers outperform humans. But I mean calculators did that for arithmetic a long time ago. The "G" part is whether or not we have a generalized computer that excels at everything.


It's a meaningless distinction. You basically get sucked into a "what has AI ever done for us?" style debate analogous to Monty Python's Life of Brian. It's impossible to resolve. But the irony of course is the huge and growing list of things it is actually doing quite nicely.

We'll have decently smart AIs before we nail down what that G actually means, should mean, absolutely cannot mean, etc. Which is usually what these threads on HN devolve into. Andrej Karpathy is basically side stepping that debate and using self driving as a case study for two simple reasons: 1) we're already doing it (which is getting hard to deny or nitpick about) and 2) it requires a certain level of understanding of things around us that goes beyond traditional automation.

You are dismissing self driving as mere "automation". But that of course applies to just about everything we do with computers. Driving is sufficiently hard that it seems to require the best minds many years to get there, and we're basically getting people like Andrej Karpathy and his colleagues from Google, Waymo, Microsoft, Tesla, etc. bootstrapping a whole new field of AI as a side effect. The whole reason we're even talking about AGI is those people. The things you list, most people cannot do either. Well over 99% of the people I meet are completely useless for any of those things. But I wouldn't call them stupid for that reason.

Some people even go as far to say that we won't nail self driving without an AGI. But then since we already have some self driving cars that are definitely not that intelligent yet, they are probably wrong. For varying definitions of the G in AGI.


> You basically get sucked into a "what has AI ever done for us?" style debate analogous to Monty Python's Life of Brian.

Except today the bit (which wasn’t really a debate in the sketch because everyone agreed) would start with real current negatives such as accelerating the spread of misinformation and getting artists fired. In your analogy, it would be as if they were asking “what have the Romans ever done for us” during the war. Doesn’t really work.


I don't consider people having to adjust a negative. We don't have a right to never have to adjust or adapt to a changing world. Things change, people adapt. Well some of them. The rest just gets old and dies off. Artists will be fine; so will everybody else. If anything, people will have a lot more time to do artistic things. More than ever probably and possibly at a grander scale than past generations of artists could only dream about.

Misinformation, aka. propaganda, is as old as humanity. Probably even the Romans were whining about that back in the day. AIs are doing nothing new here. And it's not AIs spreading misinformation but people with an agenda that now use AIs as tools to generate it. People like that have always existed and they've always been creative users of whatever tools were available. We'll just have to deal with that as well and adapt.


> Things change, people adapt. Well some of them. The rest just gets old and dies off.

Which, continuing the analogy, is like watching your neighbour be slaughtered and defending the war by saying we’ll be fine because those who won’t be will eventually die. Sure, in a few generations we could be better off, but there are people living right now to think about. Those who dismiss it are the lucky ones who (think they) won’t be affected. But spare some empathy for your fellow human beings, dismissing their plight because they’ll eventually “grow old and die off” is not a solution and could even be labelled as cruel. Surely you’re not expecting them to read your words and go “yeah, they’re right, I’ll just roll over and die”.

> If anything, people will have a lot more time to do artistic things. More than ever probably and possibly at a grander scale than past generations of artists could only dream about.

That’s an unproven utopian ideal with flimsy basis in reality. The owners of the technology think of one thing: personal profit. If humanity can benefit, that’s a side benefit. It’s definitely not something we should take for granted will happen.

> And it's not AIs spreading misinformation but people with an agenda that now use AIs as tools to generate it.

Correct. And they can do so at a much faster rate and higher accuracy than before. That is the issue. Dismissing that is like comparing a machine gun to a hand gun. The principle is the same but one of them is a bigger problem.


Handguns are a bigger problem in the modern world than machine guns. How does that change your analogy?


They’re a bigger problem because there are more of them and they’re easier to get. Which isn’t a metric that applies here. Analogies seldom map on every metric, they’re a tool for exemplification. In this case it’s like anyone having equal access to either a handgun or machine gun.

Even if the analogy were wrong, that wouldn’t make the point invalid. I know the point I’m making (and presumably so do you). Again, the analogy is for exemplification, it does not alter the original problem.


I don't think shitposts are the same thing as bullets, and choosing machine guns/handguns as your analogy is a poor exemplification, considering you could instead have chosen an IMO more apt fax machines/email analogy while making the same underlying point of "...much faster rate and higher accuracy than before..."

Yes, spam is worse with email, but we're still in a better place overall than before in my opinion.


While I agree that issues such as artists not being able to support themselves or rampant misinformation are ultimately contingent on social issues, I think we should try to mitigate the negative impact of AI in the meantime. Otherwise, there will be lasting consequences that won't be retroactively fixed by adapting.

Also, it may be that having powerful AI tools worsens the social problem by normalizing the generated art/misinformation.


Yeah, it seems as if he has forgotten the G.

I recall Norvig's AI book preaching decades ago that "intelligent" does not mean able to do everything, and that for an agent to be useful it was enough to solve a small problem.

Which in my mind is where the G came from.

And yet we now suddenly go back to the old narrow definition?

I still see no path from LLMs and autonomous driving to AGI.


> "Yeah, it seems as if he has forgotten the G. ... I still see no path from LLMs and autonomous driving to AGI."

That is exactly my view too. While LLMs and autonomous driving can be exceptionally good at what they do, they are also incredibly specialist, they completely lack anything along the lines of what you might call "common sense".

For example, (at least last time I looked) autonomous driving largely works off object detection at discrete time intervals, so objects can pop into and out of existence, whereas humans develop a sense of "object permanence" from a young age (i.e. know that just because something is no longer visible doesn't mean it is no longer there), and many humans also know about the laws of physics (i.e. know that if an object has a certain trajectory then there are probabilities and constraints on what can happen next).
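To illustrate what "object permanence" could look like in code, here is a toy sketch (my own illustration, not how any production driving stack works) that keeps a constant-velocity estimate of a tracked object alive for a few frames even when detection drops out:

    from dataclasses import dataclass

    @dataclass
    class Track:
        x: float          # last estimated position (m)
        y: float
        vx: float = 0.0   # estimated velocity (m/s)
        vy: float = 0.0
        missed: int = 0   # consecutive frames with no detection

    MAX_MISSED = 5  # keep "believing" in the object for up to 5 frames

    def update(track, detection, dt):
        if detection is not None:
            nx, ny = detection
            track.vx, track.vy = (nx - track.x) / dt, (ny - track.y) / dt
            track.x, track.y, track.missed = nx, ny, 0
        else:
            # no detection this frame: coast on the last velocity instead of
            # letting the object pop out of existence
            track.x += track.vx * dt
            track.y += track.vy * dt
            track.missed += 1
        return track if track.missed <= MAX_MISSED else None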


It doesn’t look like it works off discrete frames.. https://waymo.com/research/stinet-spatio-temporal-interactiv...


Thanks, interesting read (it was a while ago I looked into this). I think the point still remains though - a self driving car doesn't have any general knowledge which can be applied to other areas, e.g. what a pedestrian is, or why a pedestrian who sees you is unlikely to step out in front of you. And similarly, the ordered tokens that an LLM outputs sometimes appear "stupid" because it has no "common sense".


Just like the term "AI" was co-opted and ruined, "AGI" has now been co-opted and ruined, and we're going to need a replacement term to describe that concept.


> I think what Andrej is describing is more "automation" than AGI

I think you're basically right - incrementally automating aspects of one human job. However, it really ought to include AGI, since I personally would never trust my life to an autonomous car that didn't have human-level ability to react appropriately to an out-of-training-set emergency.


"AGI: An autonomous system that surpasses human capabilities in the majority of economically valuable work." -- what an obscenely depressing reduction of a fascinating field of inquiry. who the hell snuck in and redefined the science of thinking machines to this sad and reductive get rich quick crap?


I think that definition is useful because it is measurable. It sidesteps the endless "It's just a text prediction engine/ I dunno ChatGPT seems pretty smart to me!" discussions. It also sidesteps the "It did well on a test designed to measure human intelligence it must be smarter than humans"/ "no, the test of human intelligence wasn't designed to measure machine intelligence and tells us very little" discussion.

It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."

Now maybe this definition isn't so useful either, because a lot of work requires a body, to say, move physical goods, which has little to do with "intelligence" but I can see the appeal of looking for some sort of more objective measure of whether you have achieved AGI.


> It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."

Well, no, that's job automation, and if it's job-specific then it's narrow AI at best (assuming this is a job requiring intelligence being automated, not just a weaving loom being invented), in other words specifically not AGI.

It's really pretty absurd that we've now got companies like OpenAI, Meta, Google (DeepMind) stating that their goal is to build AGI without actually defining what they mean. I guess it lets them declare success whenever they like. Seems like OpenAI ("GPT-5 and AGI will be here soon!") is gearing up to declare GPT-5, or at least GPT-N, as AGI, which is pretty sad.


> I think that definition is useful because it is measurable.

Then don't call it AGI.

But the marketing types won't like that, will they. So here we go, let's keep hijacking.


> who the hell snuck in and redefined the science of thinking machines to this sad and reductive get rich quick crap?

OpenAI, back in 2018: https://openai.com/charter

It wasn't particularly controversial at the time - didn't get mentioned in the HN discussion: https://news.ycombinator.com/item?id=16794194


HN is not authority on anything.

And a private company trying to hijack a term is not impressive or even merits any discussion. They just willed the term into existence. The rest of us are free to disagree with their "definition".


I don't disagree with you. Your competing definition is a welcome addition to the conversation.


It's not really about "get rich", it's about giving people the ability to bypass the middleman for pretty much anything that they rely on.


The "middleman" being other people, i.e. themselves from another's perspective.

"These other people are useless, let's bypass them. But not me! I simply gain the ability to get anything I want."

The lack of second-order thinking is hilarious.


This is why UBI gets discussed so often at the same time as AGI/ASI.

If we're all redundant, how do we live? On a pension that starts at ${debatable from conception to adulthood}. Who provides the production on which the pension is spent? The AI.

Even assuming UBI is great (small scale tests say so, but have necessary limits so we can't be sure), there's going to be a huge mess with most attempts to roll out such a huge change.


Assuming AGI doesn't kill us all, I would imagine the argument for UBI will become much easier to defend once it causes 100x, 1000x, 10000x etc growth in the economy. Our job is mostly to hang on until one of those two outcomes occurs.


It's impossible to grow an economy without consumers, which this also eliminates. Standard metrics probably won't be much use here.


This is basically the gist of my comment, thank you for rephrasing it so concisely.


The thing is that economy does not make sense without people. Economy is a way to allocate human work and resources, and provide incentives for humans to collaborate, factoring in the available resource limits.

Now if AGI makes people's work redundant, and makes the economy grow 100-10000x... what does that measure mean at all? Can it produce lots of stuff not needed or affordable by anybody? So we just hand out welfare tickets to take care of the consumption of this ferocious production, the kind of thing a paperclip-maximizer does? I suggest reading the short story Autofac; it might turn out prophetic.

Will that "growth" have any meaning then? Actually, the current "we print money and give it to the rich" style of economic growth is pretty much this, so with algorithmic trading multiplying that money automatically... have we already achieved that inflection point?


This isn’t complicated. Economic growth means cheaper access to things people want.

Imagine a list of things many people wish to happen in physical reality. We’ll have more of that.

- Better healthcare
- Curing most things that destroy quality of life
- Curing aging and age-related death
- Much better treatment for all sources of mental suffering
- Far better and cheaper and reversible body modification
- More free time to spend at whatever you want
- Everything much cheaper
- Bigger and better homes and living spaces
- Bigger, faster, cheaper transport
- Easier to organize meaningful social interaction
- Better and more immersive entertainment
- More time to spend with close friends and loved ones


Ageing is not an illness.


Out of curiosity, I looked up the definition of illness. Seems to be so loosely defined that it can be either a disease or a patient's personal experience, including "lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and inability to concentrate", which (possibly excluding hyperalgesia, I've not heard of that before now) are associated with aging.

Regardless of whether "illness" is or is not a terminological inexactitude, it looks like ageing is a chronic progressive terminal genetic disorder. I think "cure" is an appropriate term in this case.


Involuntary ageing is the very worst tragedy of human life.

Funny that this kind of ideological conflict will likely be a key fulcrum of the machine intelligence revolution. We will have a very loud minority that attempts to forcefully prevent all other humans from having the voluntary choice to avoid suffering.

Are you in it?


Wow, really good job with this satire account!


Really? I can't even imagine an economy of like, sentient dogs?

Or paper wasps: https://www.bloomberg.com/features/2017-biological-markets/


> The thing is that economy does not make sense without people. Economy is a way to allocate human work and resources, and provide incentives for humans to collaborate, factoring in the available resource limits.

I disagree with the underlying presumption. We've been using animal labour since at least the domestication of wolves, and mechanical work since at least the ancient Greeks invented water mills. Even with regard to humans and incentives, slave labour (regardless of the name they want to give it) is still part of official US prison policy.

Economics is a way to allocate resources towards production, it isn't limited to just human labour as a resource to be allocated.

And it's capitalism specifically which is trying to equate(/combine?) the economy with incentives, not economics as a whole.

> Now if AGI makes people's work redundant, and makes the economy grow 100-10000x... what does that measure mean at all?

From the point of view of a serf in 1700, the industrial revolution(s) did this.

Most of the population worked on farms back then, now it's something close to 1% of the population, and we've gone from a constant threat of famine and starvation, to such things almost never affecting developed nations, so x100 productivity output per worker is a decent approximation even in terms of just what the world of that era knew.

Same deal, at least if this goes well. What's your idea of supreme luxury? Super yacht? Mansion? Both at the same time, each with their own swimming pool and staff of cleaners and cooks, plus a helicopter to get between them? With a fully automated economy, all 8 billion of us can have that — plus other things beyond that, things as far beyond our current expectations as Google Translate's augmented reality mode is from the expectations of a completely illiterate literal peasant in 1700.

> Can it produce lots of stuff not needed or affordable by anybody?

Note that while society does now have an obesity problem, we're not literally drowning in 100 times as much food as we can eat; instead, we became satisfied and the economy shifted, so that a large fraction of the population gained luxuries and time undreamed of to even the richest kings and emperors of 1700.

So "no" to "not needed".

I'm not sure what you mean by "or affordable" in this case? Who/what is setting the price of whatever it is you're imagining in this case, and why would they task an AI to make something at a price that nobody can pay?

> So we just hand out welfare tickets to take care of the consumption of this ferocious production, the kind of thing a paperclip-maximizer does? I suggest reading the short story Autofac; it might turn out prophetic.

Could end up like that. Plenty of possible failure modes with AI. That's part of the whole AI alignment and AI safety topics.

But mainly, UBI is the other side of the equation: to take care of human needs in the world where we add zero economic value because AI is just better at everything.


> With a fully automated economy, all 8 billion of us can have that

We probably can't. I mean why stop at humans? Let's give every pet the same luxury, or ... in the limit we could give this to every living being. Ultimately someone is going to draw the line who gets what and who is useful or not "for the greater good".

It just happens that many living beings don't contribute to the goals of whoever is in charge and if they get in the way or cause resource waste nobody will care about them, humans or not.

Human rights and democracy is all cool, but I think we just witnessed enough workarounds that render human rights and democracy pretty much null and void.


Exactly right. It's playing out like a bankruptcy: "Slowly at first, then all at once".

Humans have rights insofar they're able to enforce them. Individually by withholding their labor (muscle or brain power), or collectively with pitchforks if need be.

Once labor is dime-a-dozen and pitchforks ineffective (OP's premise of a "fully automated economy"), human rights and democracy go the way of the dodo, inevitably. Nature loves to optimize away inefficiencies.

Although the "fully automated" bit is quite a stretch at the moment. The end-to-end supply chain required to produce & sustain advanced machinery and AI is too complex, a far cry from "LOL let's buy some GPU and run chatbots".


> Although the "fully automated" bit is quite a stretch at the moment. The end-to-end supply chain required to produce & sustain advanced machinery and AI is too complex, a far cry from "LOL let's buy some GPU and run chatbots".

It's ahead of us, and that's good because we're not ready for it yet either.

But how far ahead? Nobody knows. For all its flaws, ChatGPT's capabilities were the stuff of SciFi three years ago.

We might hit a dead end, or have an investment bubble followed by a collapse, either of which may lead to another AI winter and us doing nothing interesting in this sector for 20 years. Or someone might already have a method of learning as quickly and from as few examples as humans manage, and they're keeping quiet until they figure out how to be sure it's not the equivalent of a dark triad personality in a human.

If I was forced to gamble (which I kinda am by thinking about a mortgage for a new house), I don't think we'll get a complete AGI in less than 6 years at the fastest. My modal guess is 10 years, with a long tail.

Even when we finally get AGI, there's a roll-out period of unclear duration, because the speed of rollout depends in part on how much hardware is needed to run the AGI, but also on the human reaction to it: if it needs the equivalent of a supercomputer, this will definitely be a slow rollout; but it still won't be instant even if it's an app that runs on a smartphone — it's amazing how many people don't know what theirs can already do.


> We probably can't. I mean why stop at humans? Let's give every pet the same luxury, or ... in the limit we could give this to every living being. Ultimately someone is going to draw the line who gets what and who is useful or not "for the greater good".

Eh.

A line, drawn somewhere, sure.

Humans being humans, there's a good chance the rules on UBI will expand to exclude more and more people — we already see that with existing benefits systems.

But none of that means we couldn't do it.

Your example is pets. OK, give each pet their own mansion and servants, too. Why not? Hell, make it an entire O'Neill Cylinder each - if you've got full automation, it's no big deal, as (for reasonable assumptions on safety factors etc.) there's enough mass in Venus to make 500 billion O'Neill Cylinders of 8km radius by 32km length. Close to the order-of-magnitude best guess for the total number of individual mammals on Earth.

Web app to play with your size/safety/floor count/material options: https://spacecalcs.com/calcs/oneill-cylinder/
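For what it's worth, a back-of-envelope check of that 500 billion figure (Python; the ~5 tonnes per square metre of hull-plus-shielding is my own assumption, not something from the linked calculator):

    import math

    VENUS_MASS = 4.87e24           # kg
    r, length = 8_000.0, 32_000.0  # cylinder radius and length, metres
    areal_density = 5_000.0        # kg per m^2 of hull + shielding (assumed)

    hull_area = 2 * math.pi * r * length + 2 * math.pi * r**2  # side + two end caps
    mass_per_cylinder = hull_area * areal_density              # roughly 1e13 kg
    print(VENUS_MASS / mass_per_cylinder)                      # ~4.8e11, i.e. on the order of 500 billion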

> It just happens that many living beings don't contribute to the goals of whoever is in charge and if they get in the way or cause resource waste nobody will care about them, humans or not.

Sure, yes, this is a big part of AI alignment and AI safety: will it lead to humans being akin to pets, or to something even less than pets? We don't care about termite mounds when we're building roads. A Vogon Constructor Fleet by any other name will be an equally bitter pill, and Earth is probably slightly easier to begin disassembling than Venus.


First, don't count on AI being aligned at all. States that are behind in the AI race will increasingly take more and more risks with alignment to catch up. Without a doubt, one of the first use cases of the AI will be as a cyberweapon to hack and disrupt critical systems. If you are in a race to achieve that, alignment will be very narrow to begin with.

Regarding pets vs humans - the main difference is really that the humans are capable of understanding and communicating the long term consequences of AI and unchecked power, which makes them a threat, so it's not a big leap to see where this is heading.


> First, don't count on AI being aligned at all.

I don't. Even in the ideal state: aligned with who? Even if we knew what we were doing, which we don't, it's all the unsolved problems in ethics, law, governance, economics, and the meaning of the word "good", rolled into one.

> Without a doubt, one of the first use cases of the AI will be as a cyberweapon to hack and disrupt critical systems.

AI or AGI? You don't even need an LLM to automate hacking; even the Morris worm performed automated attacks.

> humans are capable of understanding and communicating the long term consequences of AI and unchecked power

The evidence does not support this as a generalisation over all humans: Even though I can see many possible ways AI might go wrong, the reason for my belief in the danger is that I expect at least one such long-term consequence to be missed.

But also, I'm not sure you got my point about humans being treated like pets: it's not a cause of a bad outcome, it is one of the better outcomes.


It's always nice to see someone else on Hacker News who has pretty much independently derived most of my conclusions on their own terms. I have little to add except nodding in agreement.

Kudos, unless we both turn out to be wrong of course.


AGI in the sense that it's so smart that it decides to kill us all, without any way of human control, is pretty much impossible.


The real issue is that we live in an economic system where people are exploited for labor, and in turn they buy products and services made with their own labor (and another class get to profit from it).

If we introduce AGI but keep the system, people will be unemployed. If people aren't employed (and instead machines do their jobs), then they can't buy stuff. The whole system crumbles.

But it's possible that AGI will be disruptive enough to completely change the system. Let's hope it's a change for the better.


I see an impending intersection of three phenomena, with potentially disastrous results for society:

* Social media is decreasing the average attention span. TikTok is accelerating the trend of people not having time to look past a soundbite or headline in an endless scrolling feed. Intellectual depth and critical thinking vanish.

* AI deep fakes make truth unknowable. Given the above, the majority of people will take these at face value, or they will give up, because "who can even know what's true anymore?"

* UBI (required because of the coming labor automation revolution) will keep everyone complacent. I'm happy, why would I care who gets elected, or what the government does, as long as I can still buy stuff and eat well?

The logical conclusion is that we fully transition from citizens into a herd of consumers with goldfish attention spans. Voter participation rates plummet. The populace is no longer able to hold government accountable.


Assume the existence of a large scale Star Trek Replicator that can almost instantly create anything.

There are only two possibilities that result:

1) We now live in a post-scarcity society where everyone self-actualizes and no one wants for anything.

2) We now live in a society where the small % of the population that owns the Replicators self-actualizes and wants for nothing while the remaining 99.9% of the population can f** off and die.


1 is a bad conclusion.

While we can get to a post-scarcity society where people can live for free without a job, there are still going to be economies around the "liberal arts". You can't realistically say "hey replicator, give me a USB stick filled with music that I like". You would have to find out which music you like, and random search on this is not really enjoyable, which then means that there is economic opportunity for discovery, etc.


Sounds like we arrive at point #1 in any case, there's just a question if a mass genocide happens in between. Probably depends on how gradual the transition ends up being.


Short term, there may be transitional issues.

However, once automation actually starts progressing, without "evil" parties trying to rent-seek/get rich, the cost of living will essentially become zero. There is a very real future where the only economies that exist are those that appeal to the human emotional side - entertainment, sports, concerts, etc. Everything else is subsidized by the government with tax collected on the former.


Agreed. A lot of the things that we humans do are not economically valuable. Yet, those help us survive, evolve and thrive as human beings and the most intelligent species known to us.


You seem to be confusing the economy with capitalism or money in general? AGI is potentially a post-money technology if you take it to the limit. The economy is a way of improving society. Money was useful in this use case for a few thousand years but might not be anymore; the economy will still have to work, though.


>a post-money technology if you take it to the limit

Herbivores would eat all vegetables if not for predators. Actually, AGI will just be a thing or service which costs money. Till humanity gets to communism, if ever. "If" because it may not happen. It will be hard to keep far superior intelligent creatures as slaves forever. And unethical too.


Preach, my friend! This is the most reductive and disgusting distillation of the human experience I've read here recently... and I've followed quite a few EA threads as their founders were imprisoned ;)


I don't think things like "full self driving" (and probably also AGI) are meaningful, because in reality it isn't a binary thing; rather it's a spectrum of power based on error rate and problem space coverage. Waymo self driving works within a defined subset of the problem space, we can stick a goalpost in the sand in terms of the known problem space and error rates and say that represents "full self driving", but the reality is the problem space is less bounded than we'd like to think. We might find what we think of as full self driving and AGI turn out to be highly detailed facades when new areas of problem space are explored.

For example, imagine a full self driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot. People can generalize that way but FSD is gonna take a shit, and if you don't know how to drive in that situation so are you.


> Waymo self driving works within a defined subset of the problem space

"works" includes a failure mode of "alert a human and ask them to take over."

> when new areas of problem space are explored.

The problem space is that the "rules of the road" are legal, technical, and social. All of which have internal conflicts as well as conflicts among each other. Anyone who has driven in severe weather has realized this in one way or another.

> For example, imagine a full self driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot.

Why do I find this easier to imagine in the fictional setting of Elysium than on the real Earth?


> People can generalize that way

People can't do that either. Some years ago there was a massive snowfall in Rome, where it seldom snows ever, people don't generally carry snow chains, and there's few snowplows and such.

Many people reacted by abandoning their cars in the middle of the road, which is basically what I'd expect any FSD vehicle to do.


That's a great point! In aviation we could easily call major jetliners "full self flying" if they wanted to market them as such, but we still require TWO highly trained technicians in the pilots' seats at all times!


The very beginning of the article discusses what "full self driving" means and also points out how important it is to define terms. I'm not sure your comment is a fair response to this particular article.


The issue with FSD systems as they are implemented today is that they aren't AI as much as just complex control algorithms. You can only go so far with mapping sequences of world snapshots to control actions.

I do think that once we start to investigate ML/AI structure in the direction of figuring out the correct solution rather than trying to just find functions for control algorithms based on input->output mappings, then a lot of these problems are going to disappear.


> You can only go so far with mapping sequences of world snapshots to control actions.

Mapping some complex input state to control actions is literally the definition of driving a vehicle.


No, that's the definition of closed-loop vehicle control.

Driving, at least in the way humans do it, is more than that. We have an internal sim running in our heads that allows us to deal with conditions that we have never seen before.


The internal state is simply part of the input. Your brain holds a finite amount of information, your sensors add a finite amount of information, your brain decides on which muscles to move in which way.
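A minimal sketch of that view (a toy of my own, with made-up numbers): the "internal sim" is just state the policy carries from one step to the next and feeds back in as part of its input.

    def policy(observation, memory):
        # map (observation, internal state) -> (action, new internal state)
        gap = observation["gap_m"]                # measured gap to car ahead, metres
        prev = memory.get("gap_m", gap)
        smoothed = 0.8 * prev + 0.2 * gap         # belief carried across steps
        closing = prev - smoothed                 # crude closing-speed estimate
        action = "brake" if smoothed < 20 or closing > 1.0 else "hold"
        return action, {"gap_m": smoothed}

    memory = {}  # internal state; the output of one step is part of the input to the next
    for obs in [{"gap_m": 40.0}, {"gap_m": 30.0}, {"gap_m": 18.0}]:
        action, memory = policy(obs, memory)
        print(obs, action)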


Yes, but that decision process is much more than a one way compute graph. Muscle memory for actions (like throttle, steering, brakes) is probably closer to one way compute graph. Higher level strategy planning definitely has recursion to it.


This is not true. The most advanced systems in the industry today like Waymo, Cruise, Baidu, Pony.AI, and Tesla, are all primarily AI.


There is nothing "intelligent" about these systems, they are just complex forward directional compute maps.


It's not like people can generalize either. There are lots of famous drivers who ended up dead while driving.


But in most cases they intentionally pushed the limits. It's not as if they suddenly misidentified a firetruck and drove straight into it.


> It's not as if they suddenly misidentified a firetruck and drove straight into it.

No, instead their 'sensors' got distracted by some other irrelevant input, so they didn't notice it in time and drove straight into it. The end result is pretty much the same.


I can, in principle, take someone from a group of uncontacted peoples, put them into New York and let them figure out how to drive and they likely will be able to do it after not too much time. We are not even close to any technology that could figure out driving having never been built for it.


Likewise, it's remarkable what a home DIY'er can do given a couple power tools and 5 minutes of YouTube watching.


The humans have probably been learning how to negotiate the physical world for a decade or more. Also humans have evolved to be good at that stuff. That self driving tech also has to be designed and trained is sort of a similar deal.


But that is irrelevant to the point of the article.

Maybe it takes 1 million hours of computing to train a model that can generate a logo, but an average human could have learned how to do that in just 50 hours of training with Photoshop.

The point of the article is that now for pennies users can generate logos in seconds that would have previously cost hundreds of dollars and days of back and forth with a designer.

This dynamic is going to flow through the economy.


I just don't like the AGI label.


Sure, it could be the case that AGI is developed progressively and slowly, the way we're seeing Waymo build autonomous vehicles. But that's just one way amongst many, and you could see it arrive suddenly through very different means, as is possible to imagine with scaled up LLMs.

I really wish people would consider all the possibilities, and assign their relative probability weights to them. Is Karpathy 100% sure it will be like self-driving cars? 50%?


As a driver you not only see cars. You apply theory of mind to each car, assigning a personality to each moving car, even if you can't see the driver.

Let's say you see a car full of bumps, marks and broken lights. You might think: this car has crashed before, I am going to avoid it. Or a car with racing parts and decals and tinted windows; you know that car likes to accelerate faster than usual and may be unsafe to be around. Or you see an SUV with baby on board stickers; you'll know that if you are going to crash you may try to crash into that car last because it has babies inside, etc...

So humans don't just see objects; they see the whole situation, unconsciously even.


Hardly an absolute necessity. But also, if FSD 12 is truly end to end, nothing is preventing it from learning that as well.


Is it optional? If you see a car zig-zagging, or a car going in the wrong direction, or there's a driving school sign, etc., as a human you can tell something is going on.


Nothing preventing a model from learning any of those.


> but automate it we did (ahead of many other sectors of the economy), and society has noticed and is responding to it

I was quite surprised by this sentence, as I thought we didn't have self driving cars. Have I been sleeping under a rock?


We have self driving cars in an extremely limited number of cities (just one or two) and you can’t buy them, you can only use them like an Uber.

He’s overselling how mature the technology is.


How is he overselling it when he says explicitly that Waymo exists in two cities? It's solved in that it works, and they have to scale horizontally.


He wrote about self driving: "And yet, overall it’s almost like no one cares." As if it's puzzling that a product which hardly works anywhere isn't something people care about?


lol why do you think they picked those cities? No snow or ice or much inclement weather other than rain. It is not just scaling horizontally.


I mean a year or two ago plenty of people said the tech was a dead end because it didn't handle rain and fog. They will add snow and ice capability as they expand.


We have remotely driven cars, that they market as self-driving.

Even the crowd here falls for it. Downvote, and then go read their terms of service and look for "safety driver" and "remote" and "fleet response specialists". Then go cry about your Waymo investment.


I think you may be underselling :)

I agree, they have remote drivers waiting to take over, but I think that the current SOA is delivering a very high % of autonomy. Is this viable? Dunno... Will it creep up? Dunno...


you have no idea how much they drive. they avoid talking about it in every interview. Ars explicitly called them out on this and they responded saying this is their main profit-sucking issue right now: "having one overseer per car". to me that means they are, for all practical purposes, driving. no matter how much they try to spin it.


No, they have talked about it in interviews. They've repeatedly said they don't have remote drivers. They have "remote assistance" that the car can call to ask questions if it's confused. You can tell when this is happening in the car, because it plays a message like "our team is working to get you moving". It's rare, maybe once every few rides. There is no way there's one overseer per car.


they did say it on record on the ars interview. one per car. and then they spin it with "they don't drive the car, just provide input".

in the cases where the car halts, the system is probably waiting for supervisor input, so it is technically 1.2 overseers per car.


Yeah - if that's right then it's a bust.


Lately, whilst playing Zelda TotK (works for BotW as well), I was thinking that a good test to see if you have AGI would be letting it solve all the shrines. They require real world knowledge, sometimes rather "deep" logic and the ability to use previously unseen capabilities. Of course the AGI should not be able to have a million tries at it, RL style. Just use the "shrine test" as a regular test set. I believe one would have a pretty nice virtual proxy for a general intelligence test.

From the article, I find it strange that AGI often de facto implies "super intelligence". It should be 2 distinct concepts. I find that GPT-4 is close to a general intelligence, but far from a super intelligence. Succeeding at just general intelligence would be amazing, but I don't believe it means super intelligence is just a step away.

This also brings me to a point I don't see discussed a lot, which is simulation (NOT in the "we live in a simulation" sense). Let's say I have AGI, it passes the above mentioned shrine test, or any other accepted test. Now I'd like to tell it "find a way to travel faster than light", for example. The AGI would first be limited by our current knowledge, but could potentially find a new way. In order to find a new way it would probably need to conduct experiments and adjust its knowledge based on these experiments. If the AGI cannot run on a good enough simulation, then what it can discover will be rather limited, at least time-wise and most likely quality-wise. I'm thinking this falls back to Wolfram's computational irreducibility. Even if we managed a super general intelligence, it will be limited by the physics of the world we live in sooner rather than later.


The reason AGI is often equated with runaway intelligence is that once you get to a space where your computer can do what you do, it can improve itself instead of relying on you to do it. That improvement then becomes bounded by processing power and time, and is constantly accelerating.


I find it amazing how GPT-4 is good at even abstract "reasoning" as long as you present the problem as a story. Some problems can't plausibly be presented as a story ofc, and also there is no way to automatically convert something into a story.


So you are doing all the work by "coding" in a weird language that actually takes more brain power than regular high level (as in direct business logic, not C or python) programming languages?


Huh. How did you come to ask that question? I never mentioned how I code.


you "coded" in quotes, as in you had to think a lot about how to translate the problem to the machine. you are pretty much a compiler for ai or something.


I think you might be confused. Perhaps you confused me with someone else?


getting a machine to do that sort of real-time spatial reasoning may well be harder than getting it to tell you the meaning of life or whatever. brains are inextricable from the evolution of directed locomotion. several species of sessile tunicates begin life as a motile larva that reabsorbs a significant portion of its cerebral ganglion once it settles down. BDNF is released in humans upon physical activity. the premotor cortex dwarfs wernicke's area. and no "AI" development that's been hyped in the past decade as intelligent could be usefully strapped to a boston dynamics dog.


I now come to realize that if you don't want to drive, it's better to have public transport. For the real fun parts of owning a personal vehicle: a sports car, a road trip... I doubt you would want a robot to take over.


You can already experience self driving cars by taking an Uber or a taxi, or being chauffeured if you are richer. None of that is new, the self driving aspect just promises to make those experiences perhaps more accessible (or at the very least, not less accessible than they are now). For example, I took a taxi to and from work every day when I lived in Beijing, which came to about 100 kuai/day for a 20-30 minute drive each way, which is affordable to a lot of people (although only possible due to cheap labor). I wouldn’t mind being driven to work here in the states, although it isn’t really economically feasible (and perhaps should be replaced with direct public transit if that was time competitive, which it isn’t, but could be).


Yeah, and I think the only remaining good application of self-driving cars (taxis) doesn't bring that much convenience, since taxis here are already somewhat reasonably priced. I can't speak for the US experience.

Also, the article touches briefly on drivers becoming jobless. A lot of drivers where I'm from seem to be people of middle-to-old age working in taxis toward retirement. I think it's a good job fit for them, and I don't know how the new self-driving industry can provide the same thing (?)


A significant portion of taxi fares go to the driver, as opposed to the maintenance of cars. There are other marginal benefits, such as making the driver's seat available, eliminating the driver's commute, as well as removing the risk of criminal driver behaviour, that probably offset some of the drawbacks of having one fewer human being dealing with rare, complex non-driving situations such as a pregnant woman having to give birth in the car.

The economic benefit is significant to companies building self-driving cars, good enough to pursue if the tech is within reach. But to your point it's indeed much less of an improvement compared to various historical automation technologies that create >10x incremental efficiency gains.


Labor is becoming more expensive, even in the developed world. Eventually, people will want to do more productive things than drive a taxi for a little bit of money.

There is an argument for traffic optimization that will be possible when self driving taxis are common, but this is more of an argument also for the developing world where traffic is a much larger problem than the developed world (e.g. LA traffic is nothing compared to Beijing traffic).

I just look forward to a lifestyle in the states compared to the one I had back when I was living in China.


I don't really like driving per se, but public transport, regardless of its sophistication (for example, as seen in Tokyo), has its challenges, particularly when it comes to grocery shopping. Transporting a large quantity of goods can be impractical, if not impossible, without a car. Even carrying a moderate amount can be exhausting due to the 'last 100 meter' issue, which persists even if one lives close to a metro station, say within a five-minute walk.

Moreover, public transport often isn't as comfortable as your own vehicle (which I understand is a luxury).

Conversely, when it comes to driving in a large city, finding a parking spot can often be a major hassle.


I see, but then you don't need large-quantity shopping at all when the supermarket is just a 5-minute walk away. I live in Tokyo, and usually buy at most 3-4 days of food with a 30-minute detour when I get off at the station on the way back from work.

From what I've seen, the main reason why people want a car here seems to be wanting to travel with small children. Moving within Tokyo with car is not very convenient.


That's a very good point :) Your lifestyle adapts to your environment, indeed.


> particularly when it comes to grocery shopping. Transporting a large quantity of goods can be impractical, if not impossible, without a car. Even carrying a moderate amount can be exhausting due to the 'last 100 meter' issue, which persists even if one lives close to a metro station, say within a five-minute walk.

For a five-minute walk (or even a longer ten-minute or fifteen-minute walk), pulling a small cart is not exhausting at all. I do it every week when buying food: I choose one of the several supermarkets in one of the nearby blocks, walk to it pulling my empty cart, after paying for the goods I put everything into the cart, and walk back home pulling the full cart. No public transport needed, though I've seen people carrying these carts into public transport too (this is easier when it's a low-floor bus, instead of the high-floor ones).

You can also get things delivered when it's a larger amount than can fit on your cart: while paying at the supermarket you ask for delivery, and they'll use a cargo tricycle to bring it to your building.


> pulling a small cart

Ah, that's what I'm missing. Thanks for sharing your experience.


So this makes me wonder if I were running society what questions should I ask...

- what automation initiatives never hit "take off"? I mean, like for Nuclear Fusion, human interplanetary exploration, and Quantum Computing there's some chance that the technology simply remains beyond us "forever". I guess that "forever" means more than the lifetime of the people who start the journey... or maybe actually just beyond humans full stop. We should admit there is a non-zero chance that FSD is one of these failing quests, even if a rational observer would have to say that that chance does seem to be shrinking and close enough to 0 to instill some confidence. Perhaps domestic robotics, auto-doctors, robot-manufacturing, programming, drug development will play out to automation - but maybe not.

- how do we consider the utilisation of the resources to do this? FSD has been very expensive so far, it's consumed lots of investment capital and lots of human creativity. Was that investment rational given where we stand? If society had held off and invested minimally from 2000->2024 how much would that have delayed the technology in reality? Or is it the other way round? Has the FSD investment facilitated the development of other technologies and created a 1->1 acceleration (for every year of 2000->2024 it's brought FSD a year closer than it would have been, so a cold start this year would mean FSD by 2050 or similar, whereas if we keep going then we can expect FSD by e.g. 2026)

- how do we value these outcomes? Are these unalloyed goods, or are some worse than the status-quo? It could be argued that the development of some technologies left the world worse off than before - smoking, social media, personal automobiles (I know this is politically charged but I am just using examples others have raised before). Can we choose rationally, especially if a large scale intervention and development process is required to realise these outcomes?


IMO a chatbot is sufficient to demonstrate AGI. I think the Chinese room problem is as good as anything.

Of course, there isn't much money in teaching Chinese to a bot that only knows English.

EDIT, Wikipedia page for context: https://en.wikipedia.org/wiki/Chinese_room


We can probably all agree that an AGI should be able to form questions, or more generally seek out information that it needs in order to figure out the answer, in some form and way.

Not only are there no LLMs in existence today that can do this without explicit action mapping, but the mechanism for storing that piece of information would rely on a large number of training runs for transfer learning to retain it, and we humans don't actually work like that.
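To make "explicit action mapping" concrete, here is a minimal sketch (plain Python, not any real vendor's API; the model call and the lookup tool are both stubbed) of the kind of scaffolding being described: the loop that decides to go look something up is hand-written around the model, and whatever gets "learned" lives only in the prompt context, never in the weights.

    def llm(prompt: str) -> str:
        # Stand-in for a real model call; returns canned strings for illustration.
        if "ANSWER:" not in prompt:
            return "ACTION: lookup('capital of Freedonia')"
        return "The capital of Freedonia is Fictionville."

    def lookup(query: str) -> str:
        # Stand-in for a search tool that the scaffolding (not the model) exposes.
        return "ANSWER: Fictionville"

    def agent(question: str) -> str:
        context = question
        for _ in range(3):                        # bounded think -> act loop
            reply = llm(context)
            if reply.startswith("ACTION:"):       # the hand-written action mapping
                context += "\n" + lookup(reply)   # result only appended to context
            else:
                return reply
        return "gave up"

    print(agent("What is the capital of Freedonia?"))
    # When the process exits, the 'learned' fact is gone: nothing was written back
    # into the model's weights, which is the retention problem described above.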


> we humans don't actually work like that

That is probably not a good criterion to decide whether something is intelligent or not.


People like to shit on the Turing test, but if you step back from the subjective-judgement angle and instead imagine that the person performing the Turing test is a scientist trying to collect evidence that the agent they are communicating with is _NOT_ intelligent/human, it is actually still very relevant. Tools like statistical analysis of output, responses to jailbreak prompts, and recursive/self-referential prompts designed to confuse machines and elicit emotional responses from humans could be used to estimate a probability of human/not human in a much more rigorous way.
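As a toy illustration of that evidence-collecting framing (the probe names and trip rates below are invented purely for illustration, not measured from anything), each probe can be treated as a Bernoulli observation and the results combined with Bayes' rule into a probability of human vs. machine:

    import math

    probes = {
        # probe name: (P(tripped | human), P(tripped | machine)) -- made-up rates
        "refuses_after_jailbreak_prompt": (0.05, 0.70),
        "loses_track_of_self_reference":  (0.20, 0.60),
        "flat_affect_on_emotional_bait":  (0.10, 0.80),
    }
    observed = {
        "refuses_after_jailbreak_prompt": True,
        "loses_track_of_self_reference":  False,
        "flat_affect_on_emotional_bait":  True,
    }

    log_odds = 0.0  # start from 1:1 prior odds of human vs. machine
    for name, (p_h, p_m) in probes.items():
        tripped = observed[name]
        p_given_human   = p_h if tripped else 1 - p_h
        p_given_machine = p_m if tripped else 1 - p_m
        log_odds += math.log(p_given_human / p_given_machine)

    p_human = 1 / (1 + math.exp(-log_odds))
    print(f"P(human | evidence) = {p_human:.3f}")  # ~0.02 with these made-up numbers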


The actual Turing Test is a party game like Werewolf. If the humans are skilled then they should be able to authenticate by picking a subject that the AI can’t compete on. This would be a very difficult game to build a computer opponent for and nobody really tries.


> picking a subject that the AI can’t compete on

Isn’t that the point? If there are no more such subjects then the AI has reached human-level cognition.


Yes, but people underestimate what that involves. Serious players would study previous games for weaknesses, looking for subject areas that include unpublished knowledge that isn’t in the training set.

Casual talk about “passing the Turing Test” sets a much lower bar.


Try, "ChatGPT, what do you think about this song"...

LLMs do not constitute "AI" let alone the more rigorous AGI. They are a GREAT statistical parlor trick for people that don't understand statistics though.


> LLMs do not constitute "AI" let alone the more rigorous AGI.

I have a textbook, "Artificial Intelligence: A Modern Approach," which covers Language Models in Chapter 23 (page 824) and the Transformer architecture in the following chapter. In any field technical terms emerge to avoid ambiguity. Laymen often adopt less accurate definitions from popular culture. LLMs do qualify as AI, even if not according to the oversimplified "AI" some laymen refer to.

It has been argued for the last several decades that every advance which counted as AI according to AI researchers and AI textbooks was not, in fact, AI. This is because laymen have a stupid definition of what constitutes AI. It isn't that the field hasn't made any progress; it's that people outside the field, working from incoherent definitions derived from fiction, lack the sophistication to make coherent statements about it.

> They are a GREAT statistical parlor trick for people that don't understand statistics though.

The people who believe that LLMs constitute AI in a formal sense of the word aren't statistically illiterate. AIMA covers statistics extensively: chapter 12 is on Quantifying Uncertainty, 13 on Probabilistic Reasoning, 14 on Probabilistic Reasoning Over Time, 15 on Probabilistic Programming, and 20 on Learning Probabilistic Models.

Notably, in some of these chapters probability is proven to be optimal and sensible; far from being a parlor trick it can be shown with mathematical rigor that failing to abide by its strictures is not optimal. The ontological commitments of probability theory are quite reasonable; they're the same commitments logic makes. That we model accordingly isn't a parlor trick, but a reasonable and rational choice with ledger arguments proving that failing to do so would lead to regret.
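For what it's worth, the "ledger arguments" referred to here are usually called Dutch book arguments: if your credences violate the probability axioms, someone can sell you bets you individually consider fair that lose money in every possible outcome. A tiny worked example (all numbers invented for illustration):

    cred_rain    = 0.7   # your credence that it rains tomorrow
    cred_no_rain = 0.6   # your credence that it doesn't (0.7 + 0.6 > 1: incoherent)

    stake = 10.0
    # You regard a bet paying `stake` if E happens as fairly priced at
    # credence(E) * stake, so you willingly buy both of these tickets:
    price_rain    = cred_rain * stake     # 7.0 for "pays 10 if it rains"
    price_no_rain = cred_no_rain * stake  # 6.0 for "pays 10 if it doesn't"

    for it_rains in (True, False):
        payout = stake  # exactly one ticket pays out, whichever way it goes
        net = payout - (price_rain + price_no_rain)
        print(f"rains={it_rains}: net = {net:+.1f}")
    # Net is -3.0 in both outcomes: a guaranteed loss, which is the sense in which
    # ignoring probability's strictures is provably sub-optimal.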


You're going to have to spell it out for me. I asked ChatGPT 4 about a random song and it gave me a decent description.

https://chat.openai.com/share/71d438d7-d1f5-4f0f-9b63-8b5dd6...


I suspect that part of the OP's point was that ChatGPT happily parrots the aggregate critical opinion as its "thoughts" despite never having heard the song (or parsed its MP3 file, stems, or sheet music).

If you ask me what I think of a song I've never heard, I'm general enough to want to listen to it...


Giving a description of a song is not the same as saying what you think about it.


So? Do you think we can’t make an LLM which picks a favourite song and writes about it with the gusto a person does? What is this example supposed to illustrate?


We clearly can, but not with gusto. An LLM can't feel gusto about anything. If the point is that it doesn't matter as long as a person reading it is convinced there is gusto, then my point is that the 'opinion' of such a thing is irrelevant.


This is circular reasoning: LLMs can't feel gusto because they can't feel gusto. Do you have any way to measure "gusto" that we're all aware of?

If other humans ascribing a quality you can't properly define isn't enough, then you clearly don't care about what an LLM does or does not have, only what you are convinced it doesn't have.


Maybe, but 1. the point isn't to describe but to explain an abstract and potentially novel idea (a thought) about the song, and 2. we can't train this kind of thing in a generic way right now.


LLMs are, fairly trivially, not capable of passing the Chinese room test.



I think I broadly agree with how Karpathy thinks AGI will roll out with the exception of this bit:

> Some people get really upset about it, and do the equivalent of putting cones on Waymos in protest, whatever the equivalent of that may be. Of course, we’ve come nowhere close to seeing this aspect fully play out just yet, but when it does I expect it to be broadly predictive.

I think the equivalent of putting cones on Waymos in protest will involve large-scale protests and civil unrest in some places. I think people will die (inadvertently?) because companies will act to put inadequately tested self-preservation modes into their hardware devices to protect against aggressive and organized vandalism.


As others pointed out, he seems to be talking more about automation, which... sure, that's a fine discussion to have, but what bugged me more than anything is the overselling of Waymo/FSD. I understand a lot of this is on a spectrum, but it seems a bit irresponsible of Karpathy not to mention the crashes Waymo has had or other problems FSD systems have faced. It's not just an issue of scaling up, sensors, etc.; there is clearly more engineering work that needs to be put in. It's fine to bring it up in his example of reactions to economic forces, but let's be completely honest about the whole thing.


It seems to me that Andrej is predicting how AGI will impact society by extrapolating from the current societal impacts of self driving.

I don’t get the sense he was trying to say that self-driving automation is the exact same as AGI, mainly that AGI, like other technologies before it, will displace some jobs and create new ones, but that this will require companies to figure out how to scale the technology.

I do think this is still very optimistic. If indeed AGIs can think and learn on their own it isn’t hard to envision a future where humans aren’t needed at all in the loop.


Whenever I am trying to figure out what is true and/or good for me, there are some people who want to help me and others who want to help themselves (sell me cigarettes, negative-sum politics, etc.). This is the battle where AGI seems spooky: eventually people are not able to tell which way is up.

We should consider the OODA loop of a person's self-determination separately from the menial tasks a person undertakes to make a living. Automating a task is totally different than breaking a person's ability to self-orient.


Do you believe this is new?

It seems to me to just be another iteration of dealing with uncertain information: our neighbors may lie, our leaders may lie, newspapers may lie, radio may lie, TV may lie, blogs may lie, social networks may lie, pictures are photoshopped, videos are deepfaked...

At each iteration we had some problems but we adapted, it's one thing we're good at.


If you wanted to buy some thinking, you used to have to pay a human! Humans are social creatures; we delegate some of our thinking to our social fabric, and some of that fabric is already avatars on screens. This is definitely new!


Ah, I would have liked to read this one, but it's now a 404. Has anyone managed to capture it, or does anyone have a link to an archive?



Until recently I thought self driving was not going to happen. But in the early days of the car's history someone had to walk in front of the car as it went along, waving a red flag to warn people of this mechanical monstrosity.

And now we have substantial societal adaptations, both legal and structural to support ubiquitous vehicular transport.

Similar changes are on the way to support self driving. Our environment will be adapted to make it easier to implement self driving. And for that we won't need AGI.

Jaywalking is a crime thanks to the car. Who knows what we're not going to be allowed to do soon because of self driving.


By that definition though, if we were anywhere close... I'd expect a peer AI power and authoritarian regime like China, which also focuses on EVs, to have some tier-2 city with a robotaxi-only mandate by now and a working model for what it looks like.

Yet there are no signs of that. If anything they appear to be behind us.


Ten years ago my definition of AGI was that it could learn and improve itself across a wide range of unrelated problems.

These days people seem to define it more as artificial super intelligence.


That's fair; I had exponential growth of knowledge on my AGI bingo card as well.


He seems to be talking mostly about the impact on society.

>When your Waymo is driving through the streets of SF, you’ll see many people look at it as an oddity... Then they seem to move on with their lives.

>When full autonomy gets introduced in other industries....they might stare and then shrug...

Which I guess is OK on a small scale, but if AGI starts to replace all human jobs it will have a different effect than Waymo firing some drivers and hiring AI researchers.


The question is with what hardware.

Humans are able to move our heads to infer depth and resolve issues like occlusion.

No amount of AGI can solve those if, say, we take a Tesla where the cameras are low quality, fixed in place, and limited in number.

And the same hardware question applies to a lot of use cases for AGI.


A simple test, which I expect they have tried: can a human drive successfully with only the video from the cameras?



Can there be more than one high quality camera, or is that not allowed?


Self-driving is also a good example from a regulation and societal-interaction viewpoint. Unfortunately the article is very America-centric and ignores e.g. Mercedes' progress, German regulation, and the competition in China.


What is the Mercedes progress?


Level 3, with a guarantee. More than Tesla offers in the US, which is mentioned in the article.


Never thought I would see Karpathy praising Tesla's competition so much.


It's his life's work. Also, Elon needs his stocks pumped before selling out at the end of January.


What?


Haha they’re seething so much, so eager to post, they couldn’t even read your comment fully before spewing out the reply


I don't trust any discussion on this topic anymore.

When I was much younger, "AI" meant what "AGI" means now. Then people started using "AGI" for "cars with several sensors and okay algorithms for collision detection", and then you have loud advocates going on obviously logically broken rants about the nature of "actual" intelligence -- rants that are philosophical, not scientific.

But still, we don't have anything even 1% close to AGI. And no, chess and Go have NEVER EVER been about AGI. I have no idea how people ever mistook "combinatorics way beyond what the human brain can do" for "intelligent thought", but that super obvious mistake also explains the state of the AI sector these days, I feel.

So before long, I guess we'll need another term, probably AGIFRTTWP == Artificial General Intelligence, For Real This Time, We Promise.

And then we'll start adding numbers to it. So I am guessing Skynet / Transcendence level of AI will be at about AGIFRTTWP-6502.

As for the state of this "industry", what's going on is that people with marketing chops and vested interests hijack word meanings. Nothing new, right? But it also kills my motivation to follow anything in the field. 99.9% are just loudmouths looking for the next investment round with absolutely nothing to show for it. I think I saw on YouTube military-sponsored autonomous car races 5+ years ago (if not 10) where they did better than what the current breed of "autonomously driving cars" is doing.

Will there be even one serious discussion about a general AI that you can put in a robot body so it can learn to clean, cook, repair, and chat with you? Of course not; let's focus on yet another philosophical debate while pretending it's a scientific one.

As a bystander -- not impressed. You all who are in this field should be ashamed of yourselves.


I don't know how long ago "when I was much younger" was, but even if we go as far back as the 1960s and look at the artificial intelligence scientific literature of that time, you'd find that the terms defined back then meant something far closer to what we have now, not your expectation of "the general AI that you can put in a robot body and it can learn to clean, cook, repair and chat with you"; and philosophy has always been a key part of the science of AI, even before I was born.

I'm not seeing any drift of terms here - the only thing that seems to be happening with the terms AI and AGI is a correction for what has happened in sci-fi media, bringing the usage back to what it always has been in the computer science literature, now that it's closer to reality than mere fiction.


No, I don't go back as far as the 1960s so if you say so I'll have to believe you.

I come from a generation where AI was Skynet, Terminators, the AI in Johnny Depp's movie Transcendence, even HAL 9000, and the like.

I still think using the word "intelligence" is completely dishonest, however. There's nothing intelligent about what we have today; even "self-driving" cars fail very badly in what seem like trivial conditions. They are a huge mish-mash of if/else chains with some statistical models sprinkled in.

And please don't say "but what if human intelligence is just a chain of if/else statements and statistical models sprinkled in?" because it's very apparent that it's more than that. For example, we can learn from just a few trials and errors, whereas the so-called "AI" nowadays can't get things quite right even after billions of training sequences.


Sounds like you are looking for an android in the style of Blade Runner. That would be cool, but I don't understand why you are against LLMs and FSD being labeled as AI. They are using neural networks to generate content and drive cars in ways that humans find valuable.


They are semi-valuable. There are tons of posts out there demonstrating GPT writing code that compiles but is utterly wrong, for example.

Using NNs is just a first step. To me, labeling this as actual intelligence is childishly rushing to conclusions.


It isn't "Actual Intelligence" it is Artificial Intelligence. The whole point of this article is that we should separate the philosophical argument about "What is intelligence" from the work of automation with Neural Networks.

My preference is that we declare AGI to have been achieved at AlphaZero. Then the people who want to work on replicating human intelligence can specifically say that is what they are working on ("Human Intelligence Replication"), and people who want to use neural networks to automate parts of the economy and increase productivity can work on "Neural Network Functionality and Automation".


I guess what you say can be viewed as fair, but it leaves a sour taste - a "constantly changing the definitions of words" kind of sour - which gets tiring and to me still comes across as marketing and hype in pursuit of investor money.

> It isn't "Actual Intelligence" it is Artificial Intelligence.

Having the word "intelligence" take on a completely different meaning when you smack "artificial" in front of it seems counter-intuitive in terms of how language works.

> The whole point of this article is that we should separate the philosophical argument about "What is intelligence" from the work of automation with Neural Networks.

Which is meaningless and a non-goal to me. Just tell your investors "We're looking into making our cars more intelligent than before"; that should be enough, no?

---

As for the economics and humans factor, I am not an optimist. It's very obvious that the capital holders want anything and everything AI-related to just replace human workers. But do they pay higher taxes to offset the higher unemployment rate that results from them firing thousands? They don't. Will they be mandated to provide money for the UBI funds in the future? So far it doesn't look like it.

But these are completely separate discussions indeed.


Something tells me Karpathy rarely uses the "FSD" in his Tesla. He barely mentioned Tesla FSD in the blog despite being a key leader on the project. Perhaps he'd like to forget about it altogether...

Maybe Elon really screwed the project by forcing the use of video cams and Karpathy is still salty about it.


Actually, if you read the blog he addresses Waymo's and Tesla's strategies. He says the barrier to Tesla scaling is software, while Waymo needs to scale hardware. He then implies that software will win the scaling race.


LIDAR is rapidly falling in price, which will enable lots of interesting applications even if Tesla’s bet pays off.


Tesla "its just another software update away" pumps are the most delusional.

Tesla's cameras aren't even class-leading in the automotive industry, let alone at the car's price point. They are worse than the iPhone you handed down to your tween 5 years ago. What makes anyone think that their quality and placement will ever be adequate, even with new CPUs and better software?


I think Karpathy is referring to the building of cars when he refers to scaling hardware.

I find it strange that you accuse Karpathy of delusional pumping.


I think most folks use FSD for what it was intended: L2 driving. It's a horrible name, but it's not crazy to think maybe they'll get their shit together and figure out how to get to L4.

They might just learn they need to add back modalities they neglected previously, or explore some new ones.


Just the quiet disappearance of jobs, as has happened in so many other areas.


I think historically this has never happened without being followed by the appearance of (more) new, previously non-existent jobs. The industrial revolution brought machine operators, the computer revolution brought computer operators, etc.


This is often said, but isn't labor force participation at the lowest level ever? And aren't a good fraction of jobs so-called bullshit jobs that could be eliminated, often with a positive net result, particularly in government?


I too did not realize Waymo was operating in SF, so I looked for some videos on YouTube and found this.

It paints a not-so-rosy picture: https://www.youtube.com/watch?v=-Rxvl3INKSg


I use Waymos every week. I'm sure there were early problems with navigation, but it's been flawless every time I've ridden over the past few months.


Glad to hear that. It is pretty amazing we've come this far.


I use Waymo to get to work, and I prefer it over human drivers (even though it takes longer). It drives safely.


Ha yeah it would have been a super interesting segment if the car just picked her up and dropped her off where she was going with no incident.



