See, for all the talk of how the GDPR would lead to fines of "4% of global turnover", it has been an abysmal failure in this regard.

My understanding is that you're not supposed to be subject to solely automated decision making (Article 22, assuming the GDPR applies here).

Yet what we see is a constant stream of robotic decisions. Although a human did "review" it, there is no evidence of substantive human processing; probably just a 60-second "Yup, the computer was right."

Imagine if this were for something more serious, like ETIAS, and the human decision maker just looked at the computer's decision and, because they are disgruntled with their salary, decided within 60 seconds that whatever the computer decided must be correct. No thought or actual work went into making the human decision.

Technically, I suppose this wasn't automated decision making, but without much human thought (and in this case, how would we prove or disprove that?) it may as well have been.

This is something we need to fix as AI and computers make more and more controlling decisions: ensuring a human is required to perform a "substantive" review.

I am, however, interested in what that would actually mean. Does the human need to produce a report? Do we need a liveness test showing the appeal form they are reviewing and how long they spent looking at all the information?

For now, the human is there for "compliance" but is essentially just clicking a button called "agree with computer".

I would like to see statistics showing how long these "reviews" take, and how many uphold or reverse the robot's decision.
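
If the review system published even a minimal audit log, those numbers would be trivial to compute. A rough sketch, assuming a hypothetical CSV export (the file name and column names are made up for illustration) with per-review start and finish timestamps and an outcome field:

    import csv
    from collections import Counter
    from datetime import datetime
    from statistics import median

    durations = []
    outcomes = Counter()

    # Hypothetical export: one row per human "review" of an automated decision.
    with open("review_audit_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            started = datetime.fromisoformat(row["review_started"])
            finished = datetime.fromisoformat(row["review_finished"])
            durations.append((finished - started).total_seconds())
            outcomes[row["outcome"]] += 1  # e.g. "upheld" or "reversed"

    print(f"median review time: {median(durations):.0f}s")
    print(f"upheld: {outcomes['upheld']}, reversed: {outcomes['reversed']}")

A median review time measured in seconds, with near-100% upheld, would tell you everything about how "substantive" the human involvement really is.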



