I think a lot of people are young and don't realize what it was like working 20 years ago. When I started working in 1994, most people didn't even have computers on their desks, let alone an email address. The web was brand new (we used things like Gopher and Archie), and only a handful of people had ever used or seen the internet.
Sure, I suppose it's possible that the advances we've seen in AI won't translate into huge productivity gains, but I would think that extremely unlikely.
Another point is that linear regression IS machine/statistical learning. Sure, it predates modern computation by more than 100 years, but regression algorithms are learning algorithms.
Arguing for more linear regression to solve a firm's problems is equivalent to arguing for machine learning. Now, if instead he wanted to argue that the vast majority of a business's prediction problems can be solved by simple algorithms, that is most likely true. But the economic impact of this is still part of the economic impact of machine learning.
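To make that concrete, here's a minimal sketch of linear regression in the usual fit/predict shape of a learning algorithm: "training" estimates parameters from data, "prediction" applies them to unseen input. The data and variable names are made up for illustration; only numpy's least-squares solver is assumed.

```python
import numpy as np

# Toy data (invented for illustration): predict sales from ad spend.
ad_spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sales = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# "Training": learn the slope and intercept that minimize squared error.
X = np.column_stack([ad_spend, np.ones_like(ad_spend)])
(slope, intercept), *_ = np.linalg.lstsq(X, sales, rcond=None)

# "Prediction": apply the learned parameters to unseen input.
print(slope * 6.0 + intercept)
```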
If we're classing linear regression as machine learning and agreeing it's a representative example of the type of simple algorithm that's most likely to benefit firms, I think it probably helps his point rather than harming it. It's a technique that's been around for ages, it's far from arcane knowledge, and every business has had the computing capability to run useful linear regressions on various not-particularly-huge datasets in a user-friendly GUI app for at least a couple of decades now.
For the most part they haven't run those regressions at all, and where they have, they haven't been awe-inspiringly successful in their predictions, never mind so successful that the models are supplanting the research of their knowledge workers.
This overshoots the target. It's like saying that because AI uses algebra, algebra is AI.
Linear regression and general regression schemes are captured within supervised learning methods. So yes, these systems use linear regression as a fundamental building block, but they build on it significantly.
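One way to see "build on it": a linear model trained by gradient descent on squared error uses the same training loop that, with a richer model function, trains a neural network. A rough sketch in plain numpy; the data and hyperparameters are invented for illustration.

```python
import numpy as np

# Toy problem: recover known weights from noisy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Linear model trained by gradient descent on mean squared error.
w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    predictions = X @ w
    grad = 2 * X.T @ (predictions - y) / len(y)   # gradient of MSE w.r.t. w
    w -= learning_rate * grad

print(w)  # should be close to [1.5, -2.0, 0.5]
```

Swap `X @ w` for a nonlinear function of the inputs and the same loop trains a much more expressive model, which is roughly what "builds on linear regression" means here.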
Notice how you can cogently disagree with the main idea while agreeing with most of the subpoints (paraphrasing below):
1) Most impactful point: The economic impact that innovations in AI/machine learning will have over the next ~2 decades is being overestimated.
DISAGREE
2) Subpoint: Overhyped (fashion-induced) tech causes companies to waste time and money.
AGREE (well, yes, but does anyone not know this?)
3) Subpoint: Most firms that want AI/ML really just need linear regression on cleaned-up data.
PROBABLY (but this doesn't prove or even support (1))
4) Subpoint: Obstacles limit applications (through incompetence).
AGREE (but it's irrelevant to (1), and also a pretty old conjecture.)
5) Subpoint: It's not true that 47 percent of total US employment is at risk ... to computerisation ... perhaps over the next decade or two.
PROBABLY (that this number/timeframe is optimistic means very little. One decade after the Internet, many people said it hadn't upended industry as predicted. Whether it took 10, 20, or 30 years, the important fact is that the revolution happened.)
It would be interesting to know whether those who agree in the comments agree with the sensational headline or point 1, or with the more obvious and less consequential points 2-5.