Hacker News
How to talk to someone who doesn't trust AI (generatingconversation.substack.com)
5 points by vsreekanti 10 months ago | 6 comments



This article misses the mark on every level for me. I would have thought that the article was talking about how to sell AI to laymen, but then it says "These are almost all very technical people". That's me.

I disagree with the assertion that "I don't trust AI". It's not a question of trusting or not trusting "AI". It's a question of trusting or not trusting the people and companies who are capitalizing on it, and I don't trust them even a little bit.

The rest of the objections they list are ones I've heard some people express, but none of them are remotely why I view the entire field with suspicious eyes.

I absolutely see the utility of this stuff. I work on ML and deep learning systems for my day job, and so the utility is obvious to me. My issue is that the people working on LLMs/generative AI seem to be refusing to address the rather serious risks and downsides.

I'm not talking about ridiculous notions like rogue AI taking over the world and the like. I'm talking about the deep social and economic dangers that will inevitably be posed by how some people and corporations will use this stuff.

What concerns me the most is that LLMs/genAI will seriously decrease the amount of trust people can have in society when the trust level is already at a crisis point. Without a certain amount of trust, society cannot function.


To me there's an intermediate and more-immediate kind of distrust going on: Everyone should distrust and avoid any tool which unpredictably fails in serious ways that are hidden from the user.

For example, imagine you were digitizing important blueprints and financial/legal documents before trashing the originals, but the scanning process was unpredictably changing numbers inside the text [0], leaving your copies wrong in small but incredibly serious ways.

In the case of LLMs (or the example scanner), the tool will eventually yield an answer that looks correct but is dangerously wrong, and the amount of continuous human fact-checking required--itself another form of distrust!--can be so high that any labor savings are wiped out.
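To make that trade-off concrete, here's a quick back-of-the-envelope sketch. All the numbers are hypothetical illustrations, not measurements: the point is just that savings hinge on verification time and error rate, and flip negative fast.

```python
# Back-of-the-envelope: when does verification overhead erase the savings?
# All numbers below are hypothetical, purely for illustration.

def net_savings(manual_minutes, verify_minutes, error_rate, redo_minutes):
    """Expected minutes saved per task when using the tool.

    manual_minutes: time to do the task entirely by hand
    verify_minutes: time to fact-check the tool's output
    error_rate:     fraction of outputs that are wrong
    redo_minutes:   time to fix or redo a wrong output
    """
    expected_cost = verify_minutes + error_rate * redo_minutes
    return manual_minutes - expected_cost

# A task that takes 10 minutes by hand, 6 minutes to verify,
# with 20% of outputs wrong and 15 minutes to redo a bad one:
print(net_savings(10, 6, 0.20, 15))  # 10 - (6 + 3.0) = 1.0 minute saved

# Slightly slower verification and a higher error rate flip it negative:
print(net_savings(10, 8, 0.25, 15))  # 10 - (8 + 3.75) = -1.75 (a net loss)
```

With these toy numbers, a modest bump in checking effort or error rate turns the tool into a net cost, which is the commenter's point about fact-checking eating the savings.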

[0] http://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_...


> Converting skeptics means moving from vibes-based evals to empirically understanding the value LLMs are creating

Converting advocates means moving from vibes-based hype to empirically understanding the limits/risks LLMs are creating.


Ding ding ding! Now bring the handsome winner their cigar.


For me personally it's less an issue of trusting or distrusting the software than of trusting the people operating it and those tuning and controlling the algorithms. Once bitten by social media algorithms, I would find it hard to put faith in LLM big-data operators not to start warming up the frogs or inducing creeping normality once deployment has reached mass adoption across all manner of tools and interfaces. The allure of control and the side effects of greed can create overwhelming temptation for a myriad of organizations to weaponize a formerly benign software platform.


Advertorial alert.



