
meh. "intelligence" and "artificial intelligence" are marketing terms. According to Nils Nilsson, McCarthy and Minsky picked the term so the Dartmouth Summer Research Project on Artificial Intelligence would not be constrained by associations with "Cybernetics." McCarthy thought it (Cybernetics) too constrained because a: it traditionally focused on analog solutions and b: he thought Norbert Wiener was a bit pushy.

I asked Minsky about this a couple decades ago and it was the only time he didn't tell me I was asking a stupid question. Apparently, after the conference, the name stuck because they found it easier to get research grants with the more "neutral" sounding name.

I wonder how many people would trust "AI" if it was called "distributed indexing" or "stochastic generation."



All that matters is whether they are effective terms. That's most of the point of the article: how to constructively inquire about whether giant LLMs might be effective at solving novel problems, perhaps better than humans can, in ways that matter to humans.

Since we don't have a formal model of how systems like humans work, we can't just plug numbers into equations or generate proofs that LLMs and other ML systems will or will not be able to control other systems better than humans, so we fall back on less-formal concepts as best we can. The formal AI safety folks (MIRI et al.) already tried the formal approach; by their own admission it's hopeless. They poured a lot of time and bright minds into it, and we won't be able to formally answer questions about intelligence via decision theory or alignment before we have actually-intelligent systems interacting with us. What will happen at that point? We don't know, and can only use imprecise concepts to make educated guesses. If we don't make honest attempts at this, we run the risk of being blindsided by unexpectedly capable AI/ML systems.

This is why a large number of AI alignment researchers recommend pausing AI/ML systems at their current ability before they have a chance at being superhuman on most tasks. I wouldn't trust "distributed indexing" or "stochastic generation" if it could beat me at any particular skill of my choosing like similar systems can beat any of us at Chess or Go or Poker.


Intelligence has a pretty clear meaning. Of course you will have a hard time nailing down a definition, but that's also true for simple words like "chair" or "car".

Artificial intelligence is also a pretty good name for a pretty clear concept: intelligence that has been artificially created. Any new ambiguity here comes from the ambiguity inherent in the word "artificial". Artificial intelligence spans a large range, from if/else rules, linear regression, and decision trees, through ML methods, to god-like superintelligence. That is not a problem; I think it's good that we have a term for this. I don't like when people try to reduce artificial intelligence to a smaller subset of these things. We should certainly come up with more precise terms in addition, but I think artificial intelligence is a very useful term and concept, not a marketing term.


> Any new ambiguity here comes from the ambiguity inherent in the word "artificial".

Not sure I agree here. Seems to me the ambiguity comes from the word "intelligence". As in, does that if/else clause indicate "intelligence" that's been programmed in? Of course we've created more terms like Artificial General Intelligence[1] to try to clear up the muddy definition of what constitutes intelligence.

I think our fundamental challenge, as society begins to try to understand what AI is, comes down to what "intelligence" is. As you said, we all know what it is, but can't define it easily. If I write some really clever code, is the system running it intelligent? Is some of my intelligence imbued into that system, or does it have some of its own? At what point does a stochastic process approach intelligence: when it has enough data? Does that mean babies aren't born with intelligence, but acquire it as they learn more? So many questions, good questions, necessary questions.

[1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


> does that if/else clause indicate "intelligence" that's been programmed in

Yes, systems built with if/else rules are intelligent, in that they reproduce some heuristic rule that someone had to think about. I think the term "expert system" is kind of appropriate for them.

https://en.wikipedia.org/wiki/Expert_system

Of course they are not very intelligent, but they are more intelligent than a rock, say (unless that rock happens to be a chip running some neural network).
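
To make that concrete, here is a toy sketch in Python; the rules and the medical domain are invented purely for illustration, not taken from any real system:

    # Toy "expert system": the "intelligence" lives in hand-written
    # heuristic rules that someone had to think about; nothing is learned.
    def triage(temperature_c: float, has_rash: bool) -> str:
        """Return a made-up triage recommendation from fixed rules."""
        if temperature_c >= 39.0 and has_rash:
            return "urgent: see a doctor today"
        elif temperature_c >= 38.0:
            return "monitor: rest and re-check in a few hours"
        return "ok: no action needed"

    print(triage(39.5, True))   # urgent: see a doctor today
    print(triage(37.0, False))  # ok: no action needed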


Agree fully - I've always used the term "expert system" as well, for the same distinction. An expert system operates on rules, created by a more "creatively intelligent" entity.

Though when we talk about AI now, we are talking about what the _public_ thinks "intelligence" means. This really muddies the conversation, because these technical distinctions we are making now fall completely flat. I think when you tell random people that we have created "artificial intelligence", they think we mean what I am referring to above as "creative intelligence", not "it executes an if/else rule" intelligent. So as AI becomes a topic in the mainstream, our conversations are likely to be completely misunderstood because of the colloquial meaning of the technical terms we use.


I have been working in human and artificial intelligence for several years now. It is not at all apparent to me that intelligence has a clear meaning.

I think the definitional problem is significant, but the problem is even more fundamental than that. A couple of years ago, I witnessed a heated disagreement at a conference about whether intelligence required learning or not.

Does intelligence require embodiment? Perception? Sensation? Agency? Intentionality? Cognition? Metacognition? Other higher cognitive/executive functions?

These concepts are not just embellishments to the definition; they are fundamental to what intelligence is.

A plausible definition you could find in a journal article might be: An embodied agent that can intentionally solve problems using information obtained from the external environment plus introspection, by way of various executive functions, and then affect the external environment accordingly.

Some philosophers and researchers would agree, and some would disagree, because of what they each think intelligence is.

> but I think artificial intelligence is a very useful terminology and concept, not a marketing term.

I agree that the term is fine. But it is absolutely also used as a nonsense marketing term. ChatGPT or one of the other OpenAI models was calling itself an "advanced AI system" or some such bullshit. The term itself is not to blame, but people definitely (mis)use it that way.


Why does there have to be one specific combination of these aspects that yields the one true "intelligence"? The truth is that it is simply an extremely broad spectrum. Any agent that performs useful computations can be argued to be intelligent in some way. It does not make sense to argue endlessly in an attempt to lift this lower bound. Let's just acknowledge that slime molds, bacteria, and NPCs have some form of intelligence so we can move on and define more specific and useful types of intelligence: embodied intelligence, social intelligence, learning intelligence, general intelligence, etc. Of course these will always be vague spectra, since nailing anything down precisely gets extremely technical.

Organizing things into sets is nice and all, but set membership is usually ill-defined at the boundaries, and when it leads to endless discussions it's better to just think in terms of functionals: some abstract functional tells you how intelligent, how embodied, or how learning-capable a certain agent is, according to some scale.
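
A rough sketch of that functional view in Python, with all trait names and numbers invented purely for illustration:

    from typing import Callable, Dict

    # An agent described by a few made-up graded traits in [0, 1].
    Agent = Dict[str, float]

    # Each "functional" maps an agent to a score on some scale,
    # instead of a yes/no verdict on "is it intelligent?".
    functionals: Dict[str, Callable[[Agent], float]] = {
        "embodied": lambda a: a.get("sensors", 0.0) * a.get("actuators", 0.0),
        "learning": lambda a: a.get("adapts_from_experience", 0.0),
        "social":   lambda a: a.get("models_other_agents", 0.0),
    }

    slime_mold: Agent = {"sensors": 0.3, "actuators": 0.2,
                         "adapts_from_experience": 0.1,
                         "models_other_agents": 0.0}

    for name, f in functionals.items():
        print(f"{name}: {f(slime_mold):.2f}")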


I'd strongly disagree that intelligence has a clear meaning. Depending on who you ask, you can get answers anywhere from "slime molds are intelligent" (they can hunt for food, solve mazes, etc) to "only humans are intelligent".


But would anyone disagree with "humans are more intelligent than slime molds"? This feels like a re-hash of the argument discussed in the article.


The problem with anchoring on human intelligence is that saying 'if a human can do it, it is intelligence' doesn't break down the problem space correctly. It also leaves potential gaps where a system could have a form of intelligence humans do not, leading to humans misjudging that system's capabilities, and that could lead to disaster (a common AI risk scenario).


Reminds me of the book “Other Minds”, where the author discusses consciousness in octopuses.


Most people you ask aren't going to have an opinion that is worth voicing, even if they do happen to voice it. People love to think in binary yes/no terms, but reality isn't kind enough to present many of those in complex situations; most systems live on a gradient instead. The sorites paradox becomes our primary means of argument, rather than a set of classification systems that break down the different capabilities that intelligent systems possess.

Myself, I consider both examples intelligent, but one has vastly more capabilities.


> Intelligence has a pretty clear meaning.

Come on. We did this already, almost certainly long before you were born. Even the instrumentalists don't like it! Turing, in 1950:

> This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll.


By that reasoning "Chemistry" is just a marketing term. It was used mostly interchangeably with "Alchemy" until Georg Stahl and others began writing (without justification!) that "Chemistry" describes a more rational science and "Alchemists" were more interested in wealth. These untrue claims still became a self-fulfilling prophecy, where the theories, teachings, and practice of chemistry evolved out of the former ways of thinking to become what we know it as today.

AI, as used today, is certainly a coherent concept, and this article has justified and described it more thoroughly than most concepts I use daily. Who cares about the shaky history if it means something real today?


I think the article explains very well why "intelligence" is a useful and important concept. Namely, because it is a concise way to discuss abilities which are highly correlated, without naming each ability individually.


Somehow the rationalist community started at empty platitudes like "correlation is not causation" yet has ended up at substantive yet amazingly wrong conclusions like "correlation is identity". Amazing turn; I gotta say I did not see it coming.


I’m frankly surprised Judea Pearl’s name didn’t come up in this article, they love waving him around at times like this.


But correlation is correlation. At least, for most people's definitions of the word.


This reminds me of MRI, which started out as Nuclear Magnetic Resonance Imaging but dropped the "Nuclear" because of the stigma it carried.


There's a less cynical explanation for that.

"NMR" sounds a lot like "enema" and you don't want to get the two mixed up.


I'd never realised that before. Thank you, I think.

Deary, deary me.



