"The idea of a neural network dated back to the 1950s, but the early pioneers had never gotten it working as well as they’d hoped. By the new millennium, most researchers had given up on the idea, convinced it was a technological dead end and bewildered by the 50- year- old conceit that these mathematical systems somehow mimicked the human brain."
This is not only false but, in context, an intentional misrepresentation. Most of the issues with the model were solved by the introduction of hidden layers and backpropagation learning, which has, in my opinion, been required knowledge in CS since at least the early '90s and probably earlier (it is not clear when the idea was first formulated in usable form, but the most cited publications are from the late '80s, e.g. Rumelhart, D., Hinton, G. & Williams, R. Learning representations by back-propagating errors. Nature 323, 533–536 (1986). https://doi.org/10.1038/323533a0).
On the other hand, obviously the more complex modern approaches to the "throw a bunch of poorly understood linear algebra at the problem" problem have value, and there is a definite generational shift in the current "AI anti-winter" (for lack of a better term), but still...
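To make the backpropagation point concrete, here is a minimal sketch of a one-hidden-layer network trained with backpropagation on XOR in plain NumPy; the layer sizes, learning rate, and iteration count are arbitrary illustrative choices, not anything taken from the Rumelhart et al. paper:

    import numpy as np

    # Minimal sketch: a one-hidden-layer network trained with backpropagation
    # to learn XOR, the classic task a single-layer perceptron cannot solve.
    # Sizes, learning rate, and iteration count are arbitrary illustrative choices.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
    b1 = np.zeros(4)
    W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(5000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)             # hidden activations
        out = sigmoid(h @ W2 + b2)           # network output

        # Backward pass: push the error signal back through the hidden layer
        d_out = (out - y) * out * (1 - out)  # gradient at output pre-activation
        d_h = (d_out @ W2.T) * h * (1 - h)   # gradient at hidden pre-activation

        # Gradient-descent weight updates
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))                  # typically approaches [0, 1, 1, 0]

A single-layer perceptron cannot represent XOR at all, which is exactly the kind of limitation that hidden layers plus backpropagation removed.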
I graduated in 2010 with a degree supposedly in "CS with AI specialization" and was taught that neural networks weren't useful for much of anything at the time.
The best techniques we learned were MCMC and random forests, our computer vision was OpenCV and didn't work so well, and there wasn't any suggestion that buying a lot of GPUs and not bothering to understand the problem space would produce better results than our laptops.
Backpropagation was probably the most important step, but there was also the issue of workable training methods for many-layer networks. Fukushima did some very important - but rarely mentioned - work on this starting in 1979.
He trained layer by layer, proceeding to the next only when the current one had stabilized. The main practical issue was the enormous training and execution times with the computers of the day.
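Very roughly in the spirit of that layer-by-layer scheme, here is an illustrative toy in NumPy that trains one layer until its weight updates become negligible, freezes it, and then trains the next layer on its outputs. Each layer is trained as a tied-weight autoencoder here, which is an assumption of this sketch and not Fukushima's actual neocognitron procedure:

    import numpy as np

    # Illustrative toy only: greedy layer-by-layer training in the spirit of
    # "proceed to the next layer only once the current one has stabilized".
    # Each layer is trained as a tied-weight autoencoder; this is NOT
    # Fukushima's actual neocognitron procedure, just a sketch of the idea.
    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_layer(X, n_hidden, lr=0.5, tol=1e-4, max_epochs=500):
        """Train one layer until its weight updates become negligible."""
        W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
        for _ in range(max_epochs):
            h = sigmoid(X @ W)               # encode
            recon = sigmoid(h @ W.T)         # decode with tied weights
            err = recon - X
            d_recon = err * recon * (1 - recon)
            d_h = (d_recon @ W) * h * (1 - h)
            grad = (X.T @ d_h + d_recon.T @ h) / X.shape[0]
            W -= lr * grad
            if np.linalg.norm(lr * grad) < tol:   # "stabilized": updates are tiny
                break
        return W

    # Stack layers greedily: each new layer trains on the frozen outputs below it.
    X = rng.random((200, 16))                # toy data, purely illustrative
    weights, inputs = [], X
    for n_hidden in (8, 4):
        W = train_layer(inputs, n_hidden)
        weights.append(W)
        inputs = sigmoid(inputs @ W)         # freeze this layer, feed forward

    print([W.shape for W in weights])        # [(16, 8), (8, 4)]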
It's mostly right. "Early pioneers" is wrong, since the idea of neural networks predates backpropagation, but by the turn of the millennium they really were regarded as old-fashioned, and attention had moved to things like support vector machines and AdaBoost.