Huh. That feels like kind of a weak definition.

That makes me wonder: what kinds of work aren't economically valuable? Would that be services generally provided by government?





Maybe I'm biased, but I actually think it's a pretty good definition, as definitions go. All of the narrow measures of human intelligence we might be tempted to use - winning at games, solving math problems, acing academic tests, dominating programming competitions - are revealed as woefully insufficient as soon as an AI beats them but fails to generalize beyond them. But if you have an AI that can generate lots of revenue doing a wide variety of real work, then you've probably built something smart. Diverse revenue is a great metric.

I also find it interesting that the definition always includes the "outperforms humans" qualifier. Maybe our first AGIs will underperform humans.

Imagine I built a robot dog that behaved just like a biological dog. It bonds with people, can be trained, shows emotion, communicates, likes to play, likes to work, solves problems, understands social cues, and is loyal. IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.


> IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.

I'm not sure it would, though. The "G" in AGI stands for "General", which a dog obviously can't showcase. The comparison has to be made against humans, since the goal is ultimately to have the system perform human tasks.

The definition mentioned by tedsanders seems adequate to me. Most of the terms are fuzzy ("most", "outperform"), but limiting the criteria to economic value narrows it down to a measurable metric. Of course, this could be gamed by building a system that optimizes for financial gain over everything else, but that wouldn't be acceptable.

The actual definition is not that important, IMO. AGI, if it happens, won't appear suddenly from a singular event, but will emerge gradually until it becomes widely accepted that we have reached it. The impact on society and our lives will be impossible to ignore at that point. The problem is that along the way there will be charlatans and grifters shouting from the rooftops that they've already cracked it, but that's nothing new.


> The "G" in AGI stands for "General", which a dog obviously can't showcase.

That isn't obvious to me at all. If you don't like the dog analogy, let's try another: does a human toddler qualify as having general intelligence?


Hmm, good point.

I would say... yes. But with the strong caveat that, in the context of AGI, the individual/system should be able to showcase that intelligence, and the results should be comparable to those of a neurotypical adult human. Both a dog and a toddler can show signs of intelligence when compared to others of their own kind, but not to an adult human, which is the criterion for AGI.

This is why I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI. It could certainly be considered a step in that direction, but not strictly AGI.

But again, I don't think that a strict definition of AGI is helpful or necessary. The impact of a system with such capabilities would be impossible to deny, so a clear definition doesn't really matter.


> I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI

What makes you say that it underperforms? I ask because the evidence strongly suggests the opposite: AI models already outperform humans at most of these tasks.


> Would that be services generally provided by government?

Most services provided by governments are economically valuable, as they provide infrastructure that allows individual actors to perform better, increasing collective economic output. (Though for high-expenditure infrastructure, one could easily argue that it is not economically profitable.)



