
I have an idea how to test whether AI can be a good scientist:

Train on all published scientific knowledge and observations up to a certain point in time, just before a breakthrough occurred. Then see whether your AI can generate the breakthrough on its own.

For example, prior to 1900, quantum theory did not exist. Given what we knew then, could an AI reproduce the ideas of Planck, Einstein, Bohr, etc.?

If not, then AI will never be useful for generating scientific theory.
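
To make that concrete, here is a rough sketch of what such a cutoff evaluation could look like. Everything in it is a hypothetical placeholder (train_model, contains_key_ideas, the corpus with .year fields), not a real API:

    # Hypothetical sketch only: train_model and contains_key_ideas are
    # placeholders, not real library calls.

    CUTOFF_YEAR = 1900  # only knowledge Planck and his contemporaries had

    def cutoff_benchmark(corpus, breakthrough_rubric):
        # Keep only documents published strictly before the cutoff.
        pre_cutoff = [doc for doc in corpus if doc.year < CUTOFF_YEAR]

        model = train_model(pre_cutoff)

        # Pose the open problem of the era (here: the black-body spectrum,
        # which Planck resolved by introducing energy quanta E = h*f).
        answer = model.generate(
            "Propose a theory that correctly predicts the black-body radiation spectrum."
        )

        # Score against a rubric of the ideas the breakthrough actually required.
        return contains_key_ideas(answer, breakthrough_rubric)

The hard part in practice would be the rubric: grading whether a proposal really contains quantization without leaking the answer into the prompt.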






I don’t think this is the main point of the paper. They’re not claiming that AI is capable of scientific breakthroughs. Rather, they argue that AI excels at summarising vast amounts of existing scientific knowledge.

That's literally what "knowledge synthesis" is. Not just summarizing, but "the combination of ideas to form a theory or system."

Breakthroughs are just a special case of synthesis.


Formally speaking, breakthroughs are not simply a subset of synthesis, as they can exist outside the realm of prior knowledge.

Or just have the AI generate specific new experimental setups and parameters that we can try, so we can go "oh yeah, we just made a room-temperature superconductor".

Honestly given what we know about physics, the AI should be able to simulate physics within itself or deduce certain things we've missed.


> Honestly given what we know about physics, the AI should be able to simulate physics within itself or deduce certain things we've missed.

If by "AI" you mean language models, then no, it will not "be able to simulate physics within itself". No way.


It can simulate basic problems well enough when viewed as a black box. Give it one of Galileo's experiments.
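
Just to illustrate what "simulating" one of these basic problems amounts to, here is a toy reference calculation (my own example, not output from any model) for Galileo's inclined-plane experiment: distance from rest grows with the square of elapsed time, so successive equal time intervals add distances in the ratio 1 : 3 : 5 : 7.

    # Toy reference for Galileo's inclined-plane experiment (illustrative only).
    a = 1.0  # acceleration along the plane, arbitrary units
    distances = [0.5 * a * t**2 for t in range(1, 6)]
    increments = [d2 - d1 for d1, d2 in zip([0.0] + distances, distances)]
    print(distances)   # 0.5, 2.0, 4.5, 8.0, 12.5 -> ratios 1 : 4 : 9 : 16 : 25
    print(increments)  # 0.5, 1.5, 2.5, 3.5, 4.5  -> ratios 1 : 3 : 5 : 7 : 9

A black-box check would be whether the model, given only the setup, reproduces that times-squared pattern.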

Oh no, I mean that if we claim we have an AGI and the claim is true, it should be able to do that. LLMs are not that.

Fair enough.

And I think that's an interesting line to consider for determining whether something is, in fact, an AGI.


Discover quantum mechanics or you’re a failure!

I hope your approach with your kids is a bit more nuanced.


Is your second sentence sincere? Attacking someone's parenting to win rhetorical points on an unrelated topic is pretty low.

How dare he have high expectations of the AI product!

High expectations are one thing, and I’m an AGI skeptic, but when did being the smartest person ever become a requirement of AGI?

Since always. That's what AGI means.

AGI doesn’t have a universally accepted definition.


