
I feel the article conflates AI with the broader concepts of business intelligence and data science. If the “learning” is happening on the human side, you’re essentially just collecting and analyzing data.

Or is the article literally suggesting that the 10% of companies that profit off of AI are the 10% “learning” by retraining their models on newer data? To me that is akin to saying companies that maintain their websites tend to be more profitable; it’s not much of a revelation.



Stuff like "mutual learning" is annoyingly general. It feels like the kind of business speak that sits so high in the upper atmosphere that it's barely applicable on the ground.

I don't have the patience to suss out whether they're talking about a deep broad concept or just being too vague.

For instance:

> Organizational learning with AI is demanding. It requires humans and machines to not only work together but also learn from each other — over time, in the right way, and in the appropriate contexts. This cycle of mutual learning makes humans and machines smarter, more relevant, and more effective. Mutual learning between human and machine is essential to success with AI. But it’s difficult to achieve at scale.

I feel this is talking about a problem at my company -- I had a lot of stumbling blocks implementing AI predictions because package sizes are stored as strings, weight information is inconsistent, and so on, so it's very hard to categorize things based on what's in the database.
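
To make that concrete, the fix is mostly grunt work like this (a toy Python/pandas sketch; the column names and unit formats are invented, not my actual schema):

    import re
    import pandas as pd

    # Hypothetical rows: sizes and weights stored as free-form strings.
    df = pd.DataFrame({
        "package_size": ["12x8x4 in", "30 x 20 x 10 cm", "unknown"],
        "weight": ["2.5 kg", "2500g", "N/A"],
    })

    def parse_weight_kg(raw):
        # Pull out a number plus a unit; give up (NaN) on anything else.
        m = re.match(r"\s*([\d.]+)\s*(kg|g)\s*$", str(raw), re.IGNORECASE)
        if not m:
            return float("nan")
        value, unit = float(m.group(1)), m.group(2).lower()
        return value / 1000 if unit == "g" else value

    df["weight_kg"] = df["weight"].map(parse_weight_kg)
    print(df[["weight", "weight_kg"]])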

Perhaps that's what they mean by "humans learning from machines and machines learning from humans" -- actually following decent data standards because you have to if you want to do anything with it -- but damn, just say that.

I feel this is a silly way of saying "maybe we should have actually listened to the programmers from the very beginning when they were talking about doing things the right way."


Honest question: is just collecting and analyzing data not considered a use case for AI? I mean, sure, some businesses can essentially be run by a recommender system or something. But it also seems valuable if you can do something like use NLP to get better quantification of customer feedback, which is really just collecting and analyzing data for a human to use later.
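
Something like this is what I have in mind (a rough sketch with scikit-learn; the feedback strings and the TF-IDF + NMF choice are just stand-ins for whatever NLP you'd actually use):

    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    feedback = [
        "shipping was slow and the box arrived damaged",
        "love the product but checkout kept failing",
        "support never answered my email about a refund",
        "refund took weeks and nobody followed up",
        # ...thousands more free-text comments
    ]

    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(feedback)

    # Factor the feedback into a handful of "topics" for a human to read.
    nmf = NMF(n_components=3, random_state=0).fit(X)

    terms = tfidf.get_feature_names_out()
    for i, component in enumerate(nmf.components_):
        top = [terms[j] for j in component.argsort()[-5:][::-1]]
        print(f"topic {i}: {', '.join(top)}")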


If it's humans making observations in the data, changing non-AI parts of their software stack, and reanalyzing the data, then it sounds like there's no supervised or unsupervised machine learning going on. There could be, but the article is too hand-wavy for me to be sure. That's why I think a more accurate title would be “companies realize most of the work in AI comes down to feature engineering and data prep, not algorithm design”.
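
In my experience the estimator really does end up being one line and almost everything else is prep, e.g. (a generic sketch with scikit-learn; the column names are made up):

    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Nearly all of the pipeline is cleanup and feature prep...
    prep = ColumnTransformer([
        ("numeric", Pipeline([
            ("impute", SimpleImputer(strategy="median")),
            ("scale", StandardScaler()),
        ]), ["weight_kg", "volume_l"]),
        ("categorical", Pipeline([
            ("impute", SimpleImputer(strategy="most_frequent")),
            ("encode", OneHotEncoder(handle_unknown="ignore")),
        ]), ["carrier", "warehouse"]),
    ])

    # ...and the "algorithm design" part is a single line.
    model = Pipeline([("prep", prep), ("clf", LogisticRegression(max_iter=1000))])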


Absolutely. The current state of AI/ML practically requires a human to interpret the results. Even a recommender system is just collecting and analyzing data for humans to consume in some way. And that system needs to be maintained and retrained regularly to produce meaningful results.
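
In practice the "retrained regularly" part is usually nothing fancier than refitting on a recent window of data on a schedule (a toy sketch; the window, columns, and model choice are arbitrary):

    from datetime import timedelta

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def retrain(history: pd.DataFrame, window_days: int = 90) -> LogisticRegression:
        """Refit on the most recent window so the model tracks current behavior."""
        cutoff = history["timestamp"].max() - timedelta(days=window_days)
        recent = history[history["timestamp"] >= cutoff]
        model = LogisticRegression(max_iter=1000)
        model.fit(recent[["feature_a", "feature_b"]], recent["label"])
        return model

    # Run this on a schedule (cron, Airflow, whatever) and swap the new model in.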


This is only true of supervised offline learning.


Wrong. I have trained plenty of unsupervised models where the inferred results are purely for human consumption. In fact, I believe unsupervised models more often require a human to analyze the results.
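
A clustering job whose only output is a summary for an analyst to eyeball is about as common as it gets (minimal sketch; the random data stands in for real customer features):

    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in for real customer features (spend, visits, etc.).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

    # Nothing downstream consumes this; a human reads the summary and names the segments.
    for label in range(5):
        members = X[km.labels_ == label]
        print(f"cluster {label}: n={len(members)}, centroid={members.mean(axis=0).round(2)}")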


Whether an algorithm is supervised or not certainly does affect whether you need to retrain it periodically. Also, it does not at all affect whether the output is fed to end users or to other algorithms.


I feel like maybe we are saying the same thing but using different words.



