Hacker News

I think I can answer this for you.

You ask ChatGPT if combining bleach and [chemical] is bad. It says there can be some dangerous irritation to the lungs, and not to do it.

You prompt it a couple of times with new data suggesting the reaction could be worse, but its answer doesn't really change.

So, you put on a respirator and mix the concoction, and it melts through six floors of your building while emitting neurotoxic gas.

ChatGPT told you it would be bad. It told you not to do it. But there's a huge difference between what it said would happen and what was actually foreseeable.



First, what a weird example.

Also, the IPCC is not saying that there can be "some dangerous irritation to the lungs". The IPCC is describing catastrophic outcomes, and many climate scientists are saying those projections are most likely optimistic.

Back to your weird example: it's more akin to ChatGPT saying "don't do it, because you will probably die", and you complaining because you wanted it to say not only that you will most likely die, but also that it will be painful.

The point is that if you don't want to die and you do it anyway, then you are the problem here, not ChatGPT.



