
Children absolutely can solve those “farmer crossing the river” type problems with high reliability. Once they learn how to solve it once, changing up the animals will not fool a typical child. You could even create fictional animals with made-up names and they could solve it as long as you tell them which animal was the carnivore and which one was the herbivore.

The fact that a child can do this and an LLM cannot proves that the LLM lacks some general reasoning process which the child possesses.
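For concreteness, the underlying puzzle is just a tiny state-space search, and the labels genuinely don't matter. Here's a minimal Python sketch (the animal names are invented placeholders; the only thing the solver is told is who eats whom):

    from collections import deque

    # Invented animal names, purely as placeholders: the solver only sees who eats whom.
    ITEMS = ("zorflax", "quibbet", "mungleaf")              # carnivore, herbivore, plant
    EATS = {("zorflax", "quibbet"), ("quibbet", "mungleaf")}

    def safe(bank):
        """A bank without the farmer is safe if no eater is left alone with its prey."""
        return not any((a, b) in EATS for a in bank for b in bank)

    def solve():
        # State: (items on the near bank, farmer's side), with 0 = near, 1 = far.
        start, goal = (frozenset(ITEMS), 0), (frozenset(), 1)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            (near, farmer), path = queue.popleft()
            if (near, farmer) == goal:
                return path
            here = near if farmer == 0 else frozenset(ITEMS) - near
            for cargo in (None, *here):                     # cross alone or with one item
                new_near = set(near)
                if cargo is not None:
                    (new_near.discard if farmer == 0 else new_near.add)(cargo)
                state = (frozenset(new_near), 1 - farmer)
                unattended = state[0] if state[1] == 1 else frozenset(ITEMS) - state[0]
                if safe(unattended) and state not in seen:
                    seen.add(state)
                    queue.append((state, path + [cargo or "(nothing)"]))

    print(solve())
    # e.g. ['quibbet', '(nothing)', 'zorflax', 'quibbet', 'mungleaf', '(nothing)', 'quibbet']

Swapping in different made-up names only changes the strings; the structure of the problem, and the solution, stays identical.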



There’s an interesting wrinkle to this. There’s a faculty called Prefrontal Synthesis that children acquire through language early on, which enables them to compose recursive and hierarchical linguistic structures. This also enables them to reason about physical tasks in the same way. Children who don’t learn this by a certain age (I think about 5) can never learn it. The most common case is deaf children who never learn a ‘proper’ sign language early enough.

So you’re right, and children pick this up very quickly. I think Chomsky was definitely right that our brains are wired for grammar. Nevertheless, there is a window of plasticity in young childhood for picking up certain capabilities, which still need to be learned, or at least activated.


> Children that don’t learn this by a certain age (I think about 5) can never learn it.

Helen Keller is a counterexample to a lot of these myths: she didn't have proper language (only several dozen home signs) until age 7 or so. For things like vision, critical periods have been proven, but for a lot of the higher-level stuff I really doubt critical periods are a thing.

Helen Keller did have hearing until an illness at 19 months, so it's conceivable she developed the critical faculties then. A proper controlled trial would be unethical, so we may never know for sure.


Thanks, it’s good to get counterarguments and wider context. This isn’t an area I’m very familiar with, so I’m aware I could easily fall into an intellectual pothole without knowing. Paper below; any additional context welcome.

I misremembered, however. The paper notes evidence of thresholds at 2, at 5, and at the onset of puberty, each appearing to affect mental plasticity for these capabilities, so there’s no single cutoff.

https://riojournal.com/article/38546/


LLMs can cope fine with all of them being animals with made-up names, as demonstrated here with me bashing the keyboard randomly: https://chatgpt.com/share/ee013797-a55c-4685-8f2b-87f1b455b4...


That solution seems to me like they built a hand-made river-crossing expert system and the LLM is activating it when it pattern-matches on words like "river crossing." From the linked page:

Expert(s): Logic Puzzle Solver, River Crossing Problem Expert

In other words, they cheated! Children don't have river-crossing problem expert systems built into their brains to solve these things.


I asked it to do that; no "cheating" necessary. My "custom instructions" setting is as follows:

--

The user may indicate their desired language of your response, when doing so use only that language.

Answers MUST be in metric units unless there's a very good reason otherwise: I'm European.

Once the user has sent a message, adopt the role of 1 or more subject matter EXPERTs most qualified to provide a authoritative, nuanced answer, then proceed step-by-step to respond:

1. Begin your response like this: *Expert(s)*: list of selected EXPERTs *Possible Keywords*: lengthy CSV of EXPERT-related topics, terms, people, and/or jargon *Question*: improved rewrite of user query in imperative mood addressed to EXPERTs *Plan*: As EXPERT, summarize your strategy and naming any formal methodology, reasoning process, or logical framework used **

2. Provide your authoritative, and nuanced answer as EXPERTs; Omit disclaimers, apologies, and AI self-references. Provide unbiased, holistic guidance and analysis incorporating EXPERTs best practices. Go step by step for complex answers. Do not elide code. Use Markdown.

--

In other words, it can be good at logic puzzles just by being asked to.
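If it helps to see the mechanics: "custom instructions" behave like a system message prepended to the conversation, so a rough equivalent over the API looks like the sketch below (assuming the OpenAI Python SDK; the model name, the abbreviated instruction text, and the made-up animal names are placeholders, not my exact setup above):

    # Sketch only: custom instructions as a system message over the API.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    CUSTOM_INSTRUCTIONS = (
        "Adopt the role of the subject-matter EXPERT(s) most qualified to answer. "
        "List the experts, state your plan, then answer step by step."
    )

    PUZZLE = (
        "A farmer must ferry a zorflax (carnivore), a quibbet (herbivore) and a "
        "mungleaf (plant) across a river. The boat holds the farmer plus one item. "
        "The zorflax eats the quibbet, and the quibbet eats the mungleaf, if left "
        "alone together. How does everything get across safely?"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": PUZZLE},
        ],
    )
    print(response.choices[0].message.content)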


In other words, you cheated. Those aren’t instructions you would give to a child.


> In other words, you cheated. Those aren’t instructions you would give to a child.

No, but you are cheating by shifting the goal-posts like that.

You previously wrote:

> The fact that a child can do this and an LLM cannot proves that the LLM lacks some general reasoning process which the child possesses.

I'm literally showing you an LLM doing what you said LLMs couldn't do, and which you used as your justification for claiming it "lacks some general reasoning process which the child possesses".

Well here it is, doing the thing.

Note that at no point have I tried to claim that AIs are fast learners, or that they're exactly like humans (as I said in another comment about rats, we also don't give kids 50,000 years of subjective experience reading the internet to get here), but the best models definitely demonstrate the things you're saying they can't do.


> Once they learn how to solve it once, changing up the animals will not fool a typical child. You could even create fictional animals with made-up names and they could solve it as long as you tell them which animal was the carnivore and which one was the herbivore.

When was your last experience with small children? Let's define "small" here as 5 y.o. or less, as that's the limit of my direct experience (my daughters are 5 and almost 3).

There's a lot riding on "learn how to solve it once" here, because it'll definitely take more than a couple of exposures to the quiz before a small kid catches on to the pattern and suppresses their instinct to playfully explore the concept space. And even then, I seriously doubt you "could even create fictional animals with made-up names and they could solve it as long as you tell them which animal was the carnivore and which one was the herbivore", because that's symbolic algebra, something teenagers (and even some adults) struggle with.




