> Likewise, we should confine AI decision-making systems to applications that suit their actual abilities—while keeping the potential ramifications of their mistakes firmly in mind.
This is where I see the greatest dangers, because we are boldly applying LLMs where they don't belong. As long as the consequences only affect the experimenter, I couldn't care less, but when they impact others, it should be treated as criminal negligence.
The danger is that people will think the AI is thinking and reasoning the way they themselves do. But it isn't. It's a glorified template generator, at least for now.
Our brains and minds are far more sophisticated and nuanced than the LLMs we've built in the last few years. It'd be crazy if they weren't.