Last time this happened to someone I know, I pointed out they seemed to be picking the first choice every time.
They said, “Certainly! You’re right I’ve been picking the first choice every time due to biased thinking. I should’ve picked the first choice instead.”
It's worse than this. It doesn't matter whether a human understands recency bias, the availability heuristic, or the halo effect.
The bias will still change the decision whether you "understand" these concepts or not. Or you'll use some other bias or heuristic to overcorrect for the one you think you understand.
On this topic, I think people tend to confuse outright discrimination with the much more subtle biases and heuristics a human uses for judgement under uncertainty.
The interview process really shows how much closer we are to medieval people than we believe ourselves to be.
Picking a candidate based on the patterns in chicken guts wouldn't be much more random and might even be fairer.
Which is why, if you have a task like that and you care about the accuracy of the results, you'll want to use a technique other than going straight down the list.
Pairwise comparison is usually best but time-consuming; keeping a running log of ratings can help counteract recency bias, and so on. A rough sketch of the pairwise approach follows.
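To make that concrete, here's a minimal sketch in Python of shuffled pairwise comparison with a running Elo-style rating log. Everything in it is illustrative rather than an established procedure: the `judge(a, b)` callback (a human or an LLM picking the better of two candidates), the K-factor, and the 1000-point starting rating are all my own assumptions.

```python
import itertools
import random

K = 32  # Elo update step; arbitrary but conventional

def rank_pairwise(candidates, judge, seed=0):
    """Rank candidates via shuffled pairwise comparisons and a running
    Elo-style rating log, instead of scoring straight down the list.

    `judge(a, b)` is a hypothetical callback (a human or an LLM) that
    returns whichever of the two it prefers.
    """
    rng = random.Random(seed)
    ratings = {c: 1000.0 for c in candidates}
    log = []  # running log of every comparison, kept for later bias audits

    pairs = list(itertools.combinations(candidates, 2))
    rng.shuffle(pairs)  # randomize pair order so list position washes out

    for a, b in pairs:
        if rng.random() < 0.5:  # also swap within-pair order to blunt first-position bias
            a, b = b, a
        winner = judge(a, b)
        loser = b if winner == a else a
        # standard Elo update: expected score for the winner, then adjust both
        expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        ratings[winner] += K * (1.0 - expected)
        ratings[loser] -= K * (1.0 - expected)
        log.append((a, b, winner))

    return sorted(candidates, key=ratings.get, reverse=True), log
```

Shuffling the pairs (and swapping who appears first within each pair) is what counteracts the "pick the first one" failure mode, and the log lets you go back and check, say, whether whoever was shown first still won more often than chance.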
I think any time people say that LLMs have this flaw or another, they should also discuss whether humans have the same flaw.
We _know_ that the hiring process is full of biases, mistakes, and people making decisions for non-rational reasons. Is an LLM more or less biased than a typical human-based process?
> Is an LLM more or less biased than a typical human-based process?
Being biased isn't really the problem.
Being able to identify the bias so we can control for it and introduce processes to manage it, that's the problem.
We have quite a lot of experience with identifying and controlling for human bias at this point, and almost zero with identifying and controlling for LLM bias.
Thank you for saying this; I agree with your point exactly.
However, instead of using that known human bias to justify pervasive LLM use, which will scale up and make everything worse, we should either improve LLMs, improve humans, or some combination.
Your point is a good one, but the conclusion often drawn from it is a selfish shortcut, biased toward just throwing up our hands and saying "haha, humans suck too, am I right?", instead of substantial discussion or effort toward actually improving the situation.
Human HR staff get training specifically for bias and are at least aware that they probably have racial and sexual biases. Even you and I get this training when we start at a company.
That said, for a human too, the order in which candidates are presented will psychologically influence the final decision.