
> experts warn that algorithms could introduce racial or other illegal biases

There was literally no expert quoted in the article who asserted such a thing.

> “It’s kind of chaos,” said Ariel Nelson, a staff attorney at the National Consumer Law Center. “It’s really hard to figure out if you were rejected, why was it rejected. If it’s something you can fix or if it’s an error.”

The rightful criticism is that these systems are opaque and Kafkaesque. One of the few benefits of them is that they are going to be less biased than human review. But I guess it's too boring to assert that something is just generally unfair when you can lazily shoehorn in that it must be racist.



> One of the few benefits of them is that they are going to be less biased than human review.

This isn't necessarily true. The factors that drive the algorithms are selected by humans, and it's not always clear whether those factors end up being discriminatory. This is exactly why the systems remain opaque.


>> experts warn that algorithms could introduce racial or other illegal biases

> There was literally no expert quoted in the article who asserted such a thing.

This is exceedingly harsh. Look at the regulation of the credit card industry governing what data you can and cannot use to make credit decisions. Algorithms can certainly have bias - that's not a disputed fact - and without careful screening of the data and the models you will quite likely end up with biases. So the article isn't unfair in making that statement.


Algorithms have biases, sure.

(1) Do they have more or less bias than the humans performing the current review process?

(2) Is there a way to make their decision-making process less opaque than it is now, so the bias in it can be reduced? I know there has been some work done in this area (a), though it isn't moving as fast as many would like to see.

(a) https://link.springer.com/article/10.1007/s11948-020-00276-4


When you put machine learning into the picture, sometimes the computer learns things you didn't intend it to. (For example, that the presence of a ruler is a sign of skin cancer.) Much evidence of "racism" in this country is actually socioeconomic, but the computer is even more vulnerable than a human to drawing the false conclusion that race is relevant.
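
A rough sketch of how that can happen, with synthetic data and made-up numbers (nothing here comes from any actual screening model): a rule that never sees race at all can still reject one group at a much higher rate if it keys on a variable that tracks income.

    # Minimal sketch with synthetic data: the rule never sees the protected
    # attribute, but it keys on a score driven by income, and income differs
    # across groups, so rejection rates end up skewed anyway.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical population: group membership correlates with income.
    group = rng.integers(0, 2, size=n)                 # stand-in for a protected class
    income = rng.normal(50 + 15 * group, 10, size=n)   # group 1 is wealthier on average

    # The rule only looks at a credit-score-like number derived from income.
    score = 500 + 4 * income + rng.normal(0, 30, size=n)
    rejected = score < 650

    for g in (0, 1):
        print(f"group {g}: rejection rate {rejected[group == g].mean():.1%}")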


There are well-known problems with algorithms introducing racial biases, and rental screening notably has a number of these problems already. Calling this out is not a stretch.


Many rejections are opaque to the rejectee, deliberately. Is there any evidence here that the system is opaque to the folks who run it? On the surface it seems to be rejecting folks with low credit scores, criminal records, or evictions.

if (creditScore < X || criminalRecord || eviction) return REJECT

Which is the same process most landlords go through in their head when they don't have a system.
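
For concreteness, here's the same rule as a runnable sketch (the field names and the cutoff are illustrative assumptions, not pulled from any real screening product):

    # Hypothetical spelling-out of the one-line rule above.
    CREDIT_SCORE_FLOOR = 620  # made-up cutoff

    def screen(applicant: dict) -> str:
        if (applicant["credit_score"] < CREDIT_SCORE_FLOOR
                or applicant["has_criminal_record"]
                or applicant["has_prior_eviction"]):
            return "REJECT"
        return "ACCEPT"

    print(screen({"credit_score": 700,
                  "has_criminal_record": False,
                  "has_prior_eviction": False}))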

And, it's life. Didn't get a job? Opaque. Didn't get an apartment? Opaque. Didn't get a yes to a request for a date? Opaque.


The racist angle was introduced because it's the only plausible nexus by which this practice could be illegal.



