Thinking about the "halcyon years of AI" in the 70s and (maybe?) 80s, was there anything really "missed" then, or have most of the modern advances in AI only been possible due to the increase in computing power?
To put it another way, if you were transported back to 1979 knowing what you know now (or maybe able to bring a good book or two on the subject), would you be able to revolutionize the field of AI ahead of its time?
I'd say yes. You could bring Support Vector Machines [0] and the Vapnik-Chervonenkis theory of statistical learning [1]. You could fast-forward the whole field of Reinforcement Learning. You would also be able to show that it's possible to pass gradients through a sampling process using the reparameterization trick [2], and even through a sorting process [3].
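To make the reparameterization point concrete, here is a minimal sketch (a hypothetical toy example in plain NumPy, not anything from the cited papers): instead of sampling z ~ N(mu, sigma^2) directly, you sample parameter-free noise and write z as a deterministic function of the parameters, so a derivative with respect to mu can be estimated by Monte Carlo.

```python
# Toy reparameterization trick: estimate d/dmu E[f(z)] with z ~ N(mu, sigma^2)
# and f(z) = z**2, without differentiating "through" the sampler.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8
n_samples = 100_000

# Noise that does not depend on the parameters...
eps = rng.standard_normal(n_samples)
# ...and the sample expressed as a deterministic function of (mu, sigma, eps).
z = mu + sigma * eps

# Since dz/dmu = 1, the chain rule gives d/dmu E[z**2] ~ mean of 2*z * 1.
grad_mu_estimate = np.mean(2.0 * z)

# Analytic check: E[z**2] = mu**2 + sigma**2, so d/dmu = 2*mu = 3.0.
print(grad_mu_estimate)  # ~3.0
```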
You would also have experience working with autodiff software and could build it. Imho the advent of autograd, tf, torch and so on helped tremendously in accelerating progress and research, because researchers no longer have to spend their effort on hand-deriving and verifying gradients.
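The core of such autodiff software is simple enough that you could sketch it on a napkin in 1979. Here is a minimal, hypothetical illustration of forward-mode autodiff with dual numbers (not how autograd/tf/torch are actually implemented, which use reverse mode, but it shows the idea of propagating derivatives mechanically):

```python
# Minimal forward-mode autodiff via dual numbers: each value carries its
# derivative, and arithmetic applies the sum and product rules automatically.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot  # value and derivative

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + x + 1   # f'(x) = 6x + 1

x = Dual(2.0, 1.0)             # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)            # 15.0 13.0
```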
I took the MIT AI course 6.034 in 1972 from Patrick Winston. He taught that course periodically until his passing a couple of years ago. The 2016(?) lectures are on MIT OpenCourseWare. I would estimate there was about 2/3 overlap between the 1972 and 2016 versions. That course is heavy on heuristics, not big data.
Around 1979 an MIT group led by Gerald Sussman (still working) designed a workstation specifically to accelerate LISP. It was hypothesized that a computer that ran LISP a thousand times faster would revolutionize AI. It did not. However, the two LISP workstations that saw commercial sales did jump-start the interactive graphics workstation market (UNIX, C and C++). Dedicated language machines could not keep up with the speed improvements of general CPUs.
On the other hand, custom neural chips from Google, Apple, Nvidia (and soon Microsoft) have really helped AI techniques based upon deep convolutional neural networks. Neural chips run orders of magnitude faster than general CPUs by using simpler arithmetic and parallelism.
> It was hypothesized that a computer that ran LISP a thousand times faster would revolutionize AI. It did not. However, the two LISP workstations that saw commercial sales did jump-start the interactive graphics workstation market
It's very fitting then that GPUs have been so key in modern ML.
You might be able to help a bit, sure. There are some algorithmic improvements that have been made since then, so you could bring those back in time. Then you could assure people that if they spent the time to develop huge datasets and did enough parallel computation, they could get good results. But it would have been very slow back then.