
Though there may be real limits to AI, the author's superficial treatment offers few facts and shows no understanding of how the techniques and mathematics underlying modern machine learning are substantially different than what researchers were focusing on in the 80s.


>substantially different than what researchers were focusing on in the 80s.

??

Weren't neural networks and evolutionary algorithms all the rage in the mid-eighties?


Simple perceptrons, yes. Feed-forward networks, RNNs, CNNs, SVMs, gradient methods, and the rest? Not so sure about that. I know that genetic algorithms do still come up today, but they are a small part of the community discussion, IMO. Not to mention that research in non-linear optimization and its attendant numerical methods made real breakthroughs in the 1990s.
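To make the "gradient methods" part concrete, here is a minimal sketch (not from the thread; all names and hyperparameters are illustrative assumptions): a single logistic neuron trained by gradient descent to compute AND, the basic building block that deeper modern architectures stack and scale.

```python
import math

# Tiny illustrative example: one logistic neuron learning AND by gradient
# descent on cross-entropy loss. Learning rate and epoch count are arbitrary.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.5  # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

for _ in range(5000):
    for x, y in data:
        p = predict(x)
        err = p - y               # d(loss)/dz for sigmoid + cross-entropy
        w[0] -= lr * err * x[0]   # chain rule through the weighted sum
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(predict(x)) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

AND is linearly separable, so this single unit suffices; XOR is the classic case where it fails and a hidden layer (trained by backpropagating these same gradients) becomes necessary.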


I'll go so far as to say that even if we had had the secret software algorithm for intelligence in the 80s, we couldn't have executed it on the necessary hardware. The only examples of learning/intelligence we have run on massively parallel systems (brains). Only in the past 5 years or so have we had systems on the necessary scale (GPUs/ASICs).


I don't think so - I was involved in AI research from about '89 to '95, and the field was pretty much dominated by symbolic/logical approaches at that time, although these were arguably running out of steam. (I left the field because I blundered into the web in '92 and founded a startup in '95.)


Computers aren't substantially different from 18th century looms. The perception that computers are intelligent is a mere illusion.


Human brains aren't substantially different from mosquito brains. The perception that humans are intelligent is a mere illusion.



