
> Now it's clear that knowledge graphs are far inferior to deep neural nets

No. It depends. In general, two technologies can’t be assessed independently of the application.



Anything other than clear definitions and unambiguous axioms (which happens to be most of the real world), and GOFAI falls apart. It can't even be done. There's a reason it was abandoned in NLP long before the likes of GPT.

There isn't any class of problems deep nets can't handle. Will they always be the most efficient or best-performing solution? No, but a solution will be possible.


They should handle the problem of hallucinations then.


They are working on it. And current large language models (e.g. transformers) aren't the only way to do AI with neural networks, nor are they the only way to do AI with statistical approaches in general.

Cyc also has the equivalent of hallucinations, when its definitions don't cleanly apply to the real world.


Bigger models hallucinate less.

And we don't call it hallucination, but GOFAI mispredicts plenty.


> Bigger models hallucinate less.

I'm skeptical. Based on what research?


GPT-4 hallucinates a lot less than GPT-3.5. Same with the Claude models. This is from personal experience, but there are also benchmarks (like TruthfulQA) that try to measure hallucinations and show the same thing.


> Anything other than clear definitions and unambiguous axioms (which happens to be most of the real world), and GOFAI falls apart.

You've overstated the claim. A narrower version would be more interesting and more informative. History is almost never as simple as you imply.


I really haven't.


> There isn't any class of problems deep nets can't handle. Will they always be the most efficient or best-performing solution? No, but a solution will be possible.

This assumes that all classes of problems reduce to functions that can be approximated, right, per the universal approximation theorems (UATs)?

Even for cases where the UAT applies (which is not everywhere, as I show next), your caveat understates the case. There are dramatically better and worse algorithms for different problems.

But I think a lot of people (including the comment above) misunderstand or misapply the UATs. Think about the assumptions! UATs assume a fixed-length input, do they not? That breaks the correspondence with many classes of algorithms.*
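For reference, here's the classical UAT statement (roughly as in Cybenko 1989 / Hornik 1991; paraphrased from memory, so check the originals). Notice where the input dimension appears:

```latex
% For any continuous f on a compact set K \subset \mathbb{R}^n
% (n FIXED) and any \varepsilon > 0, there exist N, \alpha_i, w_i, b_i
% such that a one-hidden-layer network is uniformly within \varepsilon:
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \,
    \sigma\!\left( w_i^{\top} x + b_i \right) \right| < \varepsilon
```

The input dimension n is baked into the statement before any approximation happens, which is exactly the fixed-length assumption I mean.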

## Example

Let's make a DNN that sorts a list of numbers, shall we? But we can't cheat and have it only do pairwise comparisons; that is not the full sorting problem. We have to input the list of numbers and output the sorted list. At run time. With a variable-length list of inputs.

So no single DNN will do! For every input length we would need a different DNN, would we not? Training this collection of DNNs will be a whole lot of fun! It will make Bitcoin mining look like a poster child of energy conservation. /s

* Or am I wrong? Is there a theoretical result I don't know about?
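To make the fixed-input-dimension point concrete, here's a toy sketch (my own illustrative code, plain NumPy, untrained random weights, made-up names): the first weight matrix hard-codes the input length, so a net built for length-3 lists simply cannot accept a length-5 one.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(input_len, hidden=16):
    """Random (untrained) one-hidden-layer MLP mapping a length-`input_len`
    vector to `input_len` outputs. The weight shapes fix the input length."""
    W1 = rng.normal(size=(hidden, input_len))
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=(input_len, hidden))
    b2 = np.zeros(input_len)
    def forward(x):
        h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
        return W2 @ h + b2
    return forward

net3 = make_mlp(3)
print(net3(np.array([3.0, 1.0, 2.0])).shape)  # works: output shape (3,)

try:
    net3(np.array([5.0, 4.0, 3.0, 2.0, 1.0]))  # length 5: shape mismatch
except ValueError:
    print("length-5 input rejected by the length-3 net")
```

Training aside, the architecture itself already rules out one network handling all lengths (without extra machinery like padding or recurrence, which the plain UAT statement doesn't cover).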


>Even for cases where the UAT applies (which is not everywhere, as I show next), your caveat understates the case. There are dramatically better and worse algorithms for different problems.

The grand goal of AI is a general learner that can tackle at least any kind of problem we care about. Are DNNs the best-performing solution for every problem? No, and I agree on that. But they are applicable to a far wider range of problems. There is no question which is the better general-learner paradigm.

>* Or am I wrong? Is there a theoretical result I don't know about?

Thankfully, we don't need to get into theory. Go ask GPT-4 to sort an arbitrary list of numbers. Change the length and try again.



