IME chats do seem to get "stuck" on elements of the first message sent to them, even if you correct yourself later.
As for gender bias being a reflection of training data: the danger that LLMs would reproduce existing biases, with no human decision-maker to go back to and correct, was warned of years ago. Timnit Gebru was right, and now it seems that with the increasing use of these systems, the only way to counteract bias will be to measure and correct for disparate impact.