
Until I read your link (which was very interesting, BTW) I interpreted that quote the other way: "Is it your [the parent commenter's] belief that we [AI researchers] are all idiots?" I mean, sure, there's a lot of marketing junk, but it's almost as naive to assume there's no substance as it is to believe everything the marketers say.

Having said that, I don't think any serious AI researcher would claim that their creations are close to actually thinking.




> I don't think any serious AI researcher would claim that their creations are close to actually thinking.

I don't think anybody seriously believes that -- but setting reasonable public expectations from emerging technologies is at least in part the responsibility of the "technically literate class" broadly defined, research scientists, practitioners, even developers in ML/AI-adjacent fields such as myself.

Stuff like "GPT-3 is coming for your job" is already out in the press. Nobody thinks that, nobody has even said that, but nobody has done anything to set expectations straight either.

We ought to be more responsible in this respect.


> Nobody thinks that, nobody has even said that

Some have said it, though. I remember a prominent AI professor on Twitter talking about radiologists losing their jobs, and the ensuing mess surrounding it.

Except for that, I strongly agree with everything else you pointed out.


Eh, they're not wrong. Given another 20-30 years of progress at this rate, it's coming for almost every knowledge worker's job.

Which is pretty freaking cool.


It's been 20-30 years in the future for several decades now.


Yes, and something finally happened. A hell of a lot happened, actually, starting with AlphaGo and progressing to a credible Turing-test contestant only a couple of years later.

But yes, your deeper point is hard to dispute; this business has always run on punctuated equilibrium. We may be in for another 50+ years of "AI winter." I doubt it, though. It feels different this time.


I interpreted it the same way. When millions of people work on or support a subject, be it AI, cloud computing, or football, some of them must be way smarter, more ethical, more reasonable, and more successful than me. I may find the subject dumb, but then I ask myself: why do these people do it? Or, if it pays so well, why don't people do it instead of flipping burgers for measly pay, if it is so easy?

It doesn't mean they are right, but it is a good reminder that there is a high chance it is just me missing something.

As for the "marketing" thing, it is hard even for a full-time teacher to avoid both treating their audience as idiots and losing them completely. Good teachers, when unsure, prefer to take the "idiot" path; it's not ideal, but at least the audience may understand something in the end.


What I would really like to see is scientists and researchers from the statistics/AI/ML field rise up in arms and deliver verbal and in-print smackdowns (in the spirit of Edsger Dijkstra), setting the record straight against every hack, company, or manager out there selling AI/ML snake oil to the masses.



