Bit of a stretch to label an organisation's shitty website security an AI incident just because it's a portal to the AI chatbot. I guess aviation disasters should now include passengers' privacy incidents from hacked booking websites?
I agree about it being a stretch, but we err towards indexing boundary cases. This one turns on whether you consider the web application to be part of ChatGPT or not. In aviation, there are many systems that are part of safely flying from point to point that are not part of the plane itself: the control tower, the runway markings, and the processes built around these things. While the ChatGPT web application is not the model underlying the ChatGPT front end, it is part of the whole intelligent system.
Edit: On Slack we are considering downgrading this one to second-tier status, which is an "issue," but our "issue" definition typically differentiates between events (a harm happened) and non-events (a harm may happen). This doesn't quite fit here. We have robust editor discussions around these ontological problems and are developing the ruleset in response to challenges. We aim for inter-rater reliability via the editor's guide. More details are here https://arxiv.org/abs/2211.10384
It's fine to consider the web app code as part of "ChatGPT".
But you shouldn't say that and also say "ChatGPT is an AI" at the same time. Those are two slightly but importantly different definitions of "ChatGPT".
Looking at your glossary here, the definition of "AI incident" should probably be adjusted, and in its current form it should not be taken too literally. Because if a server fell over on someone, and that server was hosting an AI, that would technically fit your definition.
Great point! We are constantly discussing the criteria. For example, if a robot lost power and fell on someone, do you think that is an incident? What if the robot was never turned on and fell on someone in shipping? When these challenges come up, we try to expand the editing guide to meet the challenge of all the ways intelligence of the artificial variety can go wrong.
I see what you mean but… think of a conventional security issue around an AI like you would a chemical spill. Even though the failure may have been something boring like a bad seal, we could still call it a “hazardous chemical incident” or such.
I get your point, and I partly agree with you. I think this class of incident is at least somewhat relevant because people are rushing into the AI tool space without thinking about user privacy or security, and also because some of them use AI tooling to develop apps as quickly as possible without knowing even basic privacy or security practices.
It's a bit hard to take any of the aggregate stats seriously when they bucket anything from "AI made a mistake and killed someone" to "AI-Generated-Text-Detection Tools Reported for High Error Rates" into the same "incident count".
Incident severity and scale vary wildly between incidents. We provide the taxonomy feature so that organizations like CSET can add color to incidents, which supports more descriptive stats.