Putting on my Republican hat: it's too expensive. If we just focus on "was there bias in the output", the amount of time and effort required to analyze everything is going to be enormous. A completely zero-trust economy would have far too many parasitic drains on productivity, since we'd constantly have to prove everything to everyone, every time.[0]
Taking off my Republican hat: which is why up-front regulation of specific actions and methods is what we need instead. It's much easier for other people to spot, and much more of a bright line: "you did this thing, and we said don't do this thing." Not at the granularity of "don't use ChatGPT specifically", but more like "these are the things we will and won't allow in how you process job-application background checks" (we already do this for discrimination; I just think we should update it to reflect that we don't want a centralization/standardization of process that makes people de facto unemployable based on what some tool used by some company thinks of them).
[0] not to mention that looking for bias in outputs in a mechanical way is also gonna false-positive a bunch; p-hacking, but for accidentally getting sent to jail :(
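A minimal sketch of that footnote's point, assuming a hypothetical audit that slices a perfectly fair process into many subgroups and runs a naive significance test on each (all the names, group counts, and thresholds below are made up for illustration, not any real audit procedure):

    import random
    from math import sqrt, erf

    # Hypothetical numbers: a hiring pipeline with NO actual bias --
    # every applicant passes the check with the same probability.
    TRUE_PASS_RATE = 0.5
    APPLICANTS_PER_GROUP = 200
    NUM_SUBGROUPS = 50   # arbitrary audit slices (age band x region x ...)
    ALPHA = 0.05         # naive per-test significance threshold

    random.seed(42)

    def p_value(passes, n, p0):
        """Two-sided z-test of a subgroup's pass rate against the true rate p0."""
        se = sqrt(p0 * (1 - p0) / n)
        z = (passes / n - p0) / se
        phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # normal CDF at |z|
        return 2 * (1 - phi)

    flagged = 0
    for _ in range(NUM_SUBGROUPS):
        # simulate a subgroup whose outcomes are, by construction, unbiased
        passes = sum(random.random() < TRUE_PASS_RATE
                     for _ in range(APPLICANTS_PER_GROUP))
        if p_value(passes, APPLICANTS_PER_GROUP, TRUE_PASS_RATE) < ALPHA:
            flagged += 1

    print(f"{flagged}/{NUM_SUBGROUPS} subgroups 'show bias' in a perfectly fair process")

At a 5% threshold over 50 arbitrary slices, you expect a couple of "bias" findings from pure chance; multiply that by every company, every tool, and every audit cycle, and mechanical output-checking convicts innocent processes as a matter of course.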