
My feeling is that GOFAI had a real problem with representing uncertainty and handling contradiction. So we tried to approach it theoretically, with fuzzy logic, probability theory, and so on. But the theoretical research on uncertainty never reached any clear conclusion.

Meanwhile, the neural net (and ML) researchers just trucked on with more compute power and pretty much ignored the theoretical issues with uncertainty. And, surprisingly, with lots of amazing results.

But now they've hit the same wall: we don't actually understand how to reason with uncertainty correctly. LLMs seem to solve this by just mimicking the reasoning that humans do. But because we lack a good theory of reasoning, the model can't tell when the mimicry is good and when it's bad, unless there are a lot of specific examples. So in the most egregious cases we get hallucinations, and we have no clue how to avoid them.
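
Rough sketch of what I mean, with a made-up diagnosis example (the rules and numbers are purely illustrative, not from anywhere): crisp rules blow up on contradictory evidence, while a probabilistic update just reweights a degree of belief.

    # Two pieces of evidence that pull in opposite directions.
    # A crisp rule system has to pick a side or declare a contradiction:
    #   if fever and rash: diagnose(measles)
    #   if vaccinated: do not diagnose(measles)
    # A probabilistic treatment updates a degree of belief instead.

    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        """Posterior P(H | one observation) via Bayes' rule."""
        num = prior * likelihood_if_true
        return num / (num + (1 - prior) * likelihood_if_false)

    belief = 0.01                               # made-up prior P(measles)
    belief = bayes_update(belief, 0.9, 0.05)    # fever + rash: evidence for
    belief = bayes_update(belief, 0.02, 0.3)    # vaccinated: evidence against
    print(round(belief, 3))                     # conflicting evidence -> an in-between belief, not a crash

The point isn't this toy example; it's that nobody settled which of these calculi (Bayesian, fuzzy, Dempster-Shafer, ...) is the right one to build a whole reasoning system on.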



I think that ascribes way too much meaning to hallucinations, which are just the artifact of a big fancy Markov chain doing what you'd expect a big fancy Markov chain to do.
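
To make "big fancy Markov chain" concrete, here's the unfancy version: a word-level bigram sampler on a toy corpus (everything below is a placeholder sketch). An LLM conditions on far more context and replaces the count table with a learned network, but the sampling loop has the same shape: draw the next token from a distribution given the preceding context, with no notion of whether the output is true.

    import random
    from collections import defaultdict

    def train_bigrams(text):
        """Count word -> next-word transitions."""
        table = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
        return table

    def generate(table, start, n=20):
        """Sample a chain: each word depends only on the previous one."""
        out = [start]
        for _ in range(n):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(train_bigrams(corpus), "the"))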


I don't get your argument about the frame problem. Maybe it's like squeezing a big pillow into a small bag. A bulge forms that won't fit: that's the frame problem. Turn the pillow around and squeeze it into the bag again: a bulge now forms on the opposite side, and that's the hallucination problem. I can see how one could be the solution to the other. Hallucinations as a lack of rules.


That's an excellent summary, I have to say. Theorists pushed hard to move the needle, and practitioners with immense computing power reached the same wall and started chipping away at it.

LLMs transpose the problem by mimicking what humans would do.



