A lot of companies seem to have a poor understanding of what AI is. Many treat it like an ancient soothsayer.
I've heard too many stories of companies trying to use it to predict equipment failures or something along those lines. Did they have any failures in their datasets? No. Obviously they did not succeed.
A lot of the AI hype reminds me of the hype around data warehouses, and the amazing predictions you were supposedly going to get. It turns out getting the data and/or using 'AI' is easier than asking the right questions and figuring out what to do when you get a particular result. Both of these tools are good at what they do, but the predictions of what they can do seem a bit off the mark. Even the failure modes mirror a lot of what happened with data warehouses: 'your data is not right', 'your data is in the wrong form', 'your filters were not right', 'you used the wrong topology', and so on.
Not saying anything bad here; it is just an interesting observation of history repeating.
>A lot of companies seem to have a poor understanding of what AI is.
Yes, this is true, but a lot of this poor understanding can be traced back to the over-enthusiasm (and outright overselling, mind you) of many AI scientists.
The company I work for (mining) hired a top-notch AI guy to "change the fundamentals of data management and processing in the company, using the latest AI technology". The guy left six months later... I still wonder why he lasted that long.
Not only that, but good anomaly detection is easy to mess up: you have to be careful with how you amplify the anomalies in the dataset so that you don't wind up predicting garbage anyway.
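A toy sketch of one way this goes wrong (every name and number here is made up for illustration): if you duplicate your handful of anomaly examples *before* the train/test split, exact copies of each anomaly land on both sides of the split, so even a trivial memorizing model looks great on paper while learning nothing real. Splitting first and amplifying only the training side avoids the contamination.

```python
import random

random.seed(0)

# Hypothetical sensor readings: normals near 0, a handful of anomalies
# near 2 (deliberately overlapping, as real failures tend to be).
normal = [(random.gauss(0, 1), 0) for _ in range(200)]
anomalies = [(random.gauss(2, 1), 1) for _ in range(5)]

def one_nn_accuracy(train, test):
    # A memorizing 1-nearest-neighbour "model": predict the label of the
    # closest training point. Exact duplicates are always matched perfectly.
    correct = 0
    for x, y in test:
        pred = min(train, key=lambda t: abs(t[0] - x))[1]
        correct += pred == y
    return correct / len(test)

# WRONG: amplify the 5 anomalies 40x, THEN split. Copies of each anomaly
# end up in both train and test, so the evaluation is contaminated.
data = normal + anomalies * 40
random.shuffle(data)
split = int(0.7 * len(data))
leaky_acc = one_nn_accuracy(data[:split], data[split:])

# RIGHT: split first, then amplify only within the training portion.
# The test anomalies are ones the model has genuinely never seen.
random.shuffle(normal)
random.shuffle(anomalies)
train = normal[:140] + anomalies[:3] * 40
test = normal[140:] + anomalies[3:]
clean_acc = one_nn_accuracy(train, test)

print(f"leaky eval accuracy: {leaky_acc:.2f}")
print(f"clean eval accuracy: {clean_acc:.2f}")
```

The clean number is the one that tells you whether you can actually predict a failure you haven't already seen, which is the whole point of the exercise.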
>Heard too many stories of companies trying to use it to predict equipment failures or something along those lines. Did they have failures in their datasets? No. Obviously they did not succeed.
They then declared AI/ML a failure.