
I see two main types of 'AI safety':

(a) Safety for the business providing the model. This includes a censorship layer, system prompting, & other means of preventing the AI from giving offensive/controversial/illegal output. A lot of effort goes into this & it's somewhat effective, although it's often useless or unhelpful to end users & doesn't address big-picture concerns. (A sketch of what this layer looks like is below.)

(b) The science-fiction idea of a means to control a hypothetical AI with unbounded powers, to make sure it only uses those powers "for good". This type of safety is still speculative fiction & often assumes the AI will have agency, motivations, & abilities that we see no evidence of at present. It would address big-picture concerns, but it's not a real thing, at least not yet.
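
To make (a) concrete, here's a minimal sketch of how that layer tends to be structured: an input filter, a system prompt, & an output filter wrapped around the model call. Everything here (call_model, BLOCKLIST, the keyword matching) is a hypothetical stand-in, not any real provider's API; production systems use trained classifier models rather than string matching.

    # Type-(a) safety as an engineering wrapper around a model call.
    # All names are illustrative stand-ins, not a real API.

    SYSTEM_PROMPT = (
        "You are a helpful assistant. Refuse requests for illegal "
        "or harmful content."
    )

    # Crude keyword blocklist; real systems use classifier models.
    BLOCKLIST = {"napalm recipe", "credit card dump"}

    def call_model(system: str, user: str) -> str:
        """Stand-in for an actual LLM call; returns a canned reply."""
        return f"Model reply to: {user!r}"

    def safe_complete(user_input: str) -> str:
        # Input-side check: refuse before the model ever runs.
        if any(term in user_input.lower() for term in BLOCKLIST):
            return "Sorry, I can't help with that."
        # The system prompt steers the model's behavior.
        reply = call_model(SYSTEM_PROMPT, user_input)
        # Output-side check: a second pass over what the model said.
        if any(term in reply.lower() for term in BLOCKLIST):
            return "Sorry, I can't share that."
        return reply

    print(safe_complete("What's the capital of France?"))

The point being that (a) is an engineering wrapper around the model, not a property of the model itself.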

It remains to be seen whether (b) will be needed, or for that matter, possible.

There are a lot of other ethical questions around AI too, although they mostly aren't unique to it. E.g. AI is increasingly relevant in ethical discussions around misinformation, outsourcing of work, social/cultural biases, human rights, privacy, legal responsibility, intellectual property, etc., but these topics predate LLMs by many years.


