
> Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.

This is where I see the greatest danger, because we are boldly applying LLMs where they don't belong. As long as the consequences only affect the experimenter, I couldn't care less; but when they impact others, it should be treated as criminal negligence.




They will jam "AI" everywhere they can to boost the stock price, and there will be no safety regulations. Welcome to Idiocracy.


Absolutely.

The danger is that people will think AI is thinking and reasoning the way they do. But it isn't. It's a glorified template generator, at least for now.

Our brains and minds are far more sophisticated and nuanced than the LLMs we've built in the last few years. It'd be crazy if they weren't.


A bit of a tangent, but I suggest you could care a bit.

Imagine your son or daughter uses an LLM to diagnose their medical symptoms, with disastrous consequences.

Now imagine it is some other family member.

Now a friend.



