
> Whether they apply "reasoning" or "extremely multidimensional synthesis across hundreds of thousands of existing solutions" is a question of semantics.

I see where you're coming from - if it's good enough that we can't distinguish it, then does any difference really matter? I submit it's fundamentally different. This is essentially the Chinese Room thought experiment [1] or a nice similar metaphor with an octopus from Section 4 of this paper [2].

The trouble is not in its ability but in humans' interpretation of it. Humans see an "avocado chair" and think this AI can invent art and concepts. Producing combinations of existing concepts is not that hard, even combinations that have never existed before.

Meanwhile, it's failing at basic tasks: you can find plenty of examples of it botching simple logic, struggling with math and arithmetic, and reproducing ethics/bias problems stemming from its training data.

I think when we look forward to an AI that "reasons", this is not what anybody would mean.

Current AIs are bullshit engines. They are very impressive, and probably even useful. They are a milestone. But they are not reasoning in any meaningful way. And if you look at the math behind them there's really no reason to think they would.
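
To make "the math behind them" concrete: at the bottom it's next-token prediction, i.e. a softmax over scores for every candidate continuation, and the answer is just whichever token is most probable. A minimal Python sketch (the tokens and scores below are made up purely for illustration):

    import math

    def next_token_distribution(logits):
        # Softmax over the model's scores for each candidate token:
        # the output is whichever continuation is most probable under
        # the training data, not the result of a checked deduction.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical candidate tokens and scores after the prompt "2 + 2 ="
    vocab = ["4", "5", "22", "four"]
    logits = [6.1, 2.3, 1.0, 3.7]
    for token, p in zip(vocab, next_token_distribution(logits)):
        print(token, round(p, 3))

Nothing in that loop checks whether 2 + 2 actually equals 4; the distribution just encodes what usually follows in text like this.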

So, given a methodology that seemingly shouldn't produce reasoning ability, and no evidence that it has so far, sure, maybe scale will magically unlock it. There's always a chance, I guess. But it doesn't seem like a sensible bet.

See also Sam Altman's take on Twitter [3].

[1] https://en.wikipedia.org/wiki/Chinese_room

[2] https://aclanthology.org/2020.acl-main.463.pdf

[3] https://twitter.com/sama/status/1601731295792414720




> I see where you're coming from - if it's good enough that we can't distinguish it, then does any difference really matter? I submit it's fundamentally different. This is essentially the Chinese Room thought experiment [1] or a nice similar metaphor with an octopus from Section 4 of this paper [2].

I'm trying to follow your reasoning, but I get stuck right on this line. If you have two systems that are implemented in different ways but the output is indistinguishable, I feel that you're forced to claim that the systems operate in fundamentally similar ways. I'm actually confused as to how anyone could claim the opposite!

I've read about the Chinese Room too, and I have roughly the same reaction. The Chinese Room feels akin to saying that any particular neuron in your brain doesn't know how to think. In my view, the guy in the Chinese Room is just another one of the many neurons in your head, and it's the system as a whole that has consciousness, not any constituent part.


> If you have two systems that are implemented in different ways but the output is indistinguishable I feel that you're forced to claim that the systems operate in fundamentally similar ways.

So a nuclear power plant and a solar panel farm each generating 4500 MW are operating in fundamentally similar ways?

I mean... in some sense, yeah. They both rely on electromagnetic effects to generate electricity.

They still seem to be operating in really different ways to me, though.


Sure... I mean, one is providing 4500 MW from a fission reaction whereas the other is providing 4500 MW from a fusion reaction... but fundamentally both are providing nuclear-sourced power.


My point was about the self-contained systems, not the rest of the universe around them.


True enough; mine was that in the context of a discussion about reasoning about reasoning (if it is in fact reasoning), you might need better metaphors.

I don't hold with a lot of the evolutionary claims put forward in Jaynes' Origin of Consciousness, but he does put forward a solid initial discussion of "WTF is intelligence anyway" that's worth the read (if you've not read it and have the interest).


I think you're right, and the distinction is an important one. For example, current models can produce art in an impressionist style, but could they invent impressionism given no training data? I don't think that they could.



