Sorry, but it's just wrong to call it a hype bubble, because there are real, tangible uses of ML in production today, on massively scaled systems, that deliver extremely outsized results. All of GAAFM rely on ML at scale at this point for a large subset of their products.
It's just that these results are also EXTREMELY unevenly distributed - and good luck breaking into any area where applied ML matters without getting crushed by GAAFM.
However I agree with your point here:
>If an executive or similar is talking about "AI" or "Machine Learning" without the ability to identify specific use cases that they're hoping to implement, then that is a huge red flag.
So it's not as simple as "AI is hype" - it's not hype - it's just that most organizations will struggle to actually implement it, because all the data/talent/compute etc. sit in GAAFM.
It’s a hype bubble because of the delta between what’s being promised (a revolution of basically every aspect of business) and what’s delivered (massive improvements in very specific domains where a large, easily classified data set is available).
There are unquestionably areas where ML is delivering in spades, but it’s nowhere near as ubiquitous as the hype implies.
Even for the behemoth companies that are able to harness AI, it seems like the successful domains are a) heavily constrained and b) fault-tolerant. For example, voice assistants: they have very limited capabilities, and consumers will accept pretty poor performance. Look at the errors in Google's attempts to automatically answer questions in searches.
Do you have any examples of domains where a FAANG has operationalized AI/ML outside of consumer products?
Any insight into the actual methodology? I couldn't find specifics, but I would be curious what their baseline condition is.
I wonder if the baseline case is "no control optimization" or current control best practices. For example, one article claims the system produces cooler condenser water than normal based on outside conditions. That is already a best practice in good energy management - a wet-bulb outdoor air temperature reset strategy - and requires no ML. If their 40% savings was above and beyond these best practices, that's a pretty big accomplishment. If it's relative to a static temperature setpoint scenario (i.e., non-best-practice), it's less so.
Edit: after skimming [1], it seems like their baseline condition was the naive, non-best-practice approach. I'm not discounting the potential for ML, but a more accurate comparison should use traditional "best practice" control strategies, not a naive baseline. In some cases, the ML approach identified could even be less advantageous than current non-ML best practices (e.g., increasing cooling tower water temperature by a static 3 deg rather than tracking it against wet-bulb temperature with an offset).
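To make the two baselines concrete, here's a rough sketch of the difference between a static condenser-water setpoint and a wet-bulb reset strategy. All numbers (the 85°F static setpoint, 7°F approach, and clamp limits) are illustrative assumptions in the right ballpark for typical cooling towers, not values from the article:

```python
def static_setpoint():
    """Naive baseline: a fixed condenser-water setpoint, ignoring weather."""
    return 85.0  # deg F, constant year-round (illustrative value)

def wet_bulb_reset(outdoor_wet_bulb_f, approach_f=7.0,
                   low_limit_f=65.0, high_limit_f=85.0):
    """Non-ML best practice: track outdoor wet-bulb temperature plus a
    fixed approach, clamped to equipment limits."""
    setpoint = outdoor_wet_bulb_f + approach_f
    return max(low_limit_f, min(high_limit_f, setpoint))

# On a mild day (60 F wet-bulb), the reset already calls for much cooler
# water than the static baseline, so ML savings measured against the
# static setpoint overstate the gain over best practice.
print(static_setpoint())      # 85.0
print(wet_bulb_reset(60.0))   # 67.0
```

The point of the sketch is just that the second function is the fair comparison target: any claimed ML savings should be measured against it, not against the first.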
"In fact, the model’s first recommendation for achieving maximum energy conservation was to shut down the entire facility, which, strictly speaking, wasn’t inaccurate but wasn’t particularly helpful either."
> if you showed a google home to someone in 1980s they would be absolutely floored.
I am not so sure about that. If you stepped out of a time machine and said "this is an AI from the year 2020," they would try to converse with it and quickly realize it can't hold a conversation. People from the '80s would probably assume that by 2020 we'd have sentient robots, and be disappointed when all it can do is turn on the lights when asked in a specific way.
Well, FAANG all produce consumer products, so that rules out a bazillion legitimate applications, but Facebook and Google also sell ads, which use ML for targeting. Data center cooling was already mentioned, but did you know lithography now uses ML? There's even work on using ML for place and route.