Well, the purpose of AI regulation is specifically to hold "management" accountable when they fail to prevent misuse of the technology. The regulations are to be enacted by lawmakers. As I see it, debates like the one we're having right now will determine how the regulation eventually takes shape.
Yes, but I'm worried about the people component of my "people/tool" equation. If management sees that costly fines occur when AI is used, then maybe they'll abandon it and just use people instead.
However, if they hire a bunch of power-hungry sociopaths who are very good at hiding their malicious oppression, and who also bring in donuts every Thursday to stay on their bosses' good side, then the situation could easily lead to worse outcomes for the people who have to deal with this system.
If we create a computer system that oppresses 1% of innocent people, then that is a problem. However, I don't consider it a win to ban the computer system and replace it with a human system that oppresses 10% of innocent people. Like, the situation isn't better because humans are oppressing humans instead of a computer doing the oppression.
That's why I was focused on management. I don't care that things are going badly for some specific technology-related reason; it's management's job to fix it regardless. If management can't rely on the technology for regulatory reasons, then they might rely on people who do just as bad a job. And hey, that scenario is even better for management, because if they hire a bad actor who gets caught, then that person faces the consequences instead of them.