He spent 10 days in jail after facial recognition led to arrest of the wrong man (nj.com)
259 points by sharkweek on Dec 28, 2020 | 174 comments



It seems to me this is not a facial recognition story but a story about how the state, with very little evidence, can ruin your life and leave you with very little recourse.

As concerned as I am about state use of facial recognition, these kinds of things have been happening to people for decades now. The solution to this problem is to better secure the rights of people accused of crimes.

We need to ensure that accused people do not lose their jobs and homes, and that they have a fair opportunity to communicate with counsel and family (both are often needlessly limited by the DoC). And the state must be responsible for mitigating its mistakes.

Bail requirements are too often imposed as a matter of course and need a "burden of proof" style rethinking. Home confinement should be required before incarceration. Lastly, we need to remove the power of legislatures to unequally empower prosecutors and public defenders.


> It seems to me this is not a facial recognition story but a story about how the state, with very little evidence, can ruin your life and leave you with very little recourse.

The facial recognition is relevant: they relied on facial recognition software, even though its use is apparently illegal in New Jersey.

From the article as well:

> He asked for a lawyer, then was taken to a hallway where he was handcuffed to a bench. About an hour later, the officers – he counted seven – told him they were going to take him to a different room for more questions.

As far as I understand U.S. law, when a suspect asks for a lawyer, the interrogation must stop immediately; a lawyer must be provided or the suspect must be allowed to call one, and the lawyer must be present before the interrogation continues.

The real takeaway seems to be that the police did something illegal.

The rest of your post is about many more rules, but the issue is that the rules were broken here.


> As far as I understand U.S. law, when a suspect asks for a lawyer, the interrogation must stop immediately; a lawyer must be provided or the suspect must be allowed to call one, and the lawyer must be present before the interrogation continues.

They can hold you for a surprisingly long time without a lawyer, and it's very easy to accidentally rescind any rights you invoked. To be safe you need the ability to stay silent for 24 hours while still asserting your rights, possibly longer depending on the jurisdiction, and that's all before you're even charged with anything.

Someone being held for 10 days as a suspect for a crime they didn't commit is unfortunately a benign occurrence for the American legal system. People regularly lose months of their life, held in captivity, awaiting trial.


/s/benign/common/ perhaps. Nothing benign about being held against their will.


>It seems to me this is not a facial recognition story

Maybe there should be consequences if you sell bad software that has bad consequences? There are a lot of greedy bastards who will knowingly sell bad products while minimizing the issues. Why not have good standards like real professionals, say scientists, who need to prove that the thing they discovered is correct at a high confidence level? With AI there is no proof that it works correctly, or the sales department is bullshitting a ton of claims.


I wonder if the US will ever introduce a department like the FDA except for electronics.


The people selling the software aren't the problem, it's the people using the software. Jailing someone for bad reasons should carry bad consequences.


I will disagree. There was a guy here on HN promoting his shit software; it had a demo page that would detect your IP and show you a list of all the torrents you had downloaded. I checked it and I am 100% sure it was garbage in my case. I can imagine this type of person, with some connections or money, getting this shit software sold, and innocent people being threatened over downloading copyrighted or illegal material. That would cause a lot of suffering and drama in a family.

I hope you are not making the point that the seller has the "capitalistic obligation" to misrepresent his shit and the buyers have the duty to triple-check.

There should be strong laws: if your software promises something that is not true in a real-world situation, then the people who sold it should pay, no loopholes.


Or just take the German approach: reimburse people who aren't convicted, and guarantee everything else you would lose in the US in that event as well. That is exactly why their court system is far more choosy about who it takes to court, unlike the US, which is basically a lottery where the odds are much, much higher than any Powerball, with terrible consequences if you "win".


> a story about how the state, with very little evidence, can ruin your life and leave you with very little recourse.

Buttle? Tuttle?

https://vimeo.com/224785800


That's nothing: Steven Talley was identified by the FBI as the primary suspect in two bank robberies using a facial recognition algorithm. He had an iron-clad alibi, but the police and FBI weren't convinced. In court one of the bank tellers said Talley definitely wasn't the robber. Nonetheless Talley lost his job, his wife and his family and was held in prison for months:

"LOSING FACE: How a Facial Recognition Mismatch Can Ruin Your Life"

https://theintercept.com/2016/10/13/how-a-facial-recognition...

FTFA:

"Talley said he was held for nearly two months in a maximum security pod and was released only after his public defender obtained his employer’s surveillance records. In a time-stamped audio recording from 11:12 a.m. on the day of the May robbery, Talley could be heard at his desk trying to sell mutual funds to a potential client."

Today Talley is still trying to claw his way back to normalcy.

In the outstanding book "Hello World" Hannah Fry examines Talley's story as part of a chapter on crime, AI and facial recognition. Reading Fry's book convinced me that facial recognition software simply does not work well enough to use in police work. As Fry says:

"If you're searching for a particular criminal in digital line-up of millions...the best-case scenario is that you won't find the right person one in six times...". That is not nearly good enough for law enforcement and the courts.

"Hello World" by Hannah Fry

https://www.amazon.com/Hello-World-Hannah-Fry/dp/0857525255


Again, how is that story a failure of AI/face recognition?

At some point they even have a human compare the likeness, and the human also concludes it is the same person.

The article even features the sentence "Steve Talley is hardly the first person to be arrested for the errors of a forensic evaluation."

And yet people seem to be hellbent to make it about AI.


> And yet people seem to be hellbent to make it about AI.

Of course, because the alternative is to make it a human failure of critical actors in the justice system. Every story about mistaken identity has dozens of points where humans, police officers and magistrates, made a judgement call ... wrongly. And always obviously wrong.

That is the case here and in the linked article. You can clearly see the eyes don't match, the jaw is different... and in Steve's case you see the face is too small, his nose is much narrower than the robber's, the hairline doesn't match, Steve has a square face and the robber doesn't, his body shape (the shoulders) is very different, and Steve's ears are almost pointy, whereas the robber's are much more rounded and much shorter than Steve's. It is not a reasonable mistake for the 10-15 people involved to make.

In other words, the inevitable conclusion is that:

1) the police and prosecutor as well as at least 1 judge knew they had the wrong guy

2) they all cooperated to use their power to extract a wrongful confession from the guy, including that judge

3) they used "testimony" from someone with a clear grudge against him without question

4) which additionally was not reasonable given the bad quality images

5) they refused to believe testimony when it disagreed with their working hypothesis

6) instead, they used psychological torture to force a confession from an innocent

7) essentially, they refused to set a person free without being offered another victim

Clearly at the very least they've shown they, both as an organisation and all individuals in it, would much rather wrongfully convict an innocent person than to be left without suspects or traces. They weren't protecting the bank, society, or anyone, they were VERY clearly abusing and violating the law to protect themselves from embarrassment.

People just can't deal with this. That police just fight to get someone, anyone, convicted at all costs rather than making damn sure they got the right guy, as the law demands. That they do this disregarding all costs to the suspect and society is not something most people are willing to consider ...


It’s a failure in the sense that people put too much confidence in this kind of algorithm and place it above any eyewitness, when it should just be considered another piece of evidence.


Also, it is not merely a failure of people (in putting too much confidence into these algorithms). The software is also often actively marketed as reliable (fine print about culpability notwithstanding).


>> Again, how is that story a failure of AI/face recognition?

Because AI identified the wrong man. That a human also did, does not make the identification by AI any less wrong.


I would expect algorithms like that to put out likelihoods, not hard identifications. All it does is say "this person looks like that person". I wouldn't read that as "wrong".


Facial recognition systems are image classifiers where the classes are persons, represented as sets of images of their faces. Each person is assigned a numerical id as a class label and classification means that the system matches an image to a label.

Such systems are used in one of two modes, verification or identification.

Verification means that the system is given as input an image and a class label, and outputs positive if the input matches the label, and negative otherwise.

Identification means that the system is given as input an image and outputs a class label.

In either case, the system may not directly return a single label, but rather a set of labels, each associated with a real-valued number interpreted (by the operators of the system) as a likelihood. Even then, however, the system has a threshold delimiting positive from negative identifications. That is, if the likelihood that the system assigns to a classification is above the threshold, that is considered a "positive identification", etc.

In other words, yes, a system that outputs a continuous distribution over classes representing sets of images of peoples' faces can still be "wrong".

Think about it this way: if a system could only ever signal uncertainty, how could we use it to make decisions?
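
To make that concrete, here is a minimal sketch in Python of how a continuous score plus a threshold becomes a hard "positive identification". The embeddings, the cosine-similarity metric, and the 0.75 threshold are all invented for illustration; nothing here reflects any real vendor's system:

    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two face embeddings, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe, gallery, threshold=0.75):
        # probe:   embedding of the query image
        # gallery: dict mapping person id -> stored embedding
        # The threshold is the whole ball game: lower it and the system
        # reports more "positive identifications", including more false ones.
        scores = {label: cosine_similarity(probe, emb)
                  for label, emb in gallery.items()}
        matches = [(label, s) for label, s in scores.items() if s >= threshold]
        return sorted(matches, key=lambda m: m[1], reverse=True)

The continuous scores never go away; the operator (or a vendor default) just picks a cutoff, and everything above it gets reported as a match.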


Similar to the way you could look at guns and cigarettes. 'Guns don't kill', 'It's your own responsibility', etc.


If people are curious about Talley's civil case, this should provide a place to get updates - or at least the case number to look on your own. https://www.docketbird.com/court-cases/Talley-v-USA-et-al/co...

Legal back-and-forth continues as of 12/14/2020


The problem appears to be that police and prosecutors are either morons or incentivized so strongly to arrest and convict that they ignore obvious nonsense.

Given that this is the case, the use of AI cannot be accepted by any members of the US justice system. If I had access to a facial recognition system, I would use it as a whittling down tool, to reduce the amount of hay to find the needle in. But they clearly see a straw and conclude that it is a needle.

It's like giving a four year old a handgun. It's just not responsible. They're just not as good at this as I am. So no handgun for you.

It's a pity. Ideally you should be able to discriminate, and intelligent police should be given the tool, and stupid police not. However, because stupid police and smart police exist in an emulsion, it is better to just waste the time of the smart police than to give the stupid police too powerful a tool.


When the state doesn't have to foot the bill for taking you to court, of course they don't care. If the prosecutor had to pay all your legal fees if they lost, we'd either get some pretty amazing state prosecutors or ones who only accept evidence from cops that can convince a judge.

It's almost as if stopping cops from just locking people up would create an immense snowball effect on the whole system...


> If I had access to a facial recognition system, I would use it as a whittling down tool, to reduce the amount of hay to find the needle in. But they clearly see a straw and conclude that it is a needle.

> It's like giving a four year old a handgun. It's just not responsible. They're just not as good at this as I am. So no handgun for you.

Just wanted to say I've never heard such an argument worded so well. What is the course of action for the common man other than verbalizing such thoughts to local city councils and law enforcement?


I think the local powers have figured out the route that will already succeed: blame the tool. That way permits everyone to leave with their egos intact. The Art of War 7.36 in action, one might say.


Perhaps USians are desensitized to guns, but I paused when reading that the police officers drew their guns while chasing the fugitive in the original incident, which was about shoplifting candy from a hotel lounge. (And I wonder whether the concierge who called the cops on this guy hadn't provoked the taunt with some "kind words".)


Americans are, as a rule, highly authoritarian. You'll see it online as people use pseudo-bureaucratic corpo-speak to justify state violence: "refusal to obey a lawful order", "did not comply with a lawful request", "in violation of state regulation" etc.

Having lived elsewhere and in the US (a country I practically adore), I suspect it is because Americans have not really had true authoritarianism in any sense at any point in their history. Every time they stray close to the darkness, they walk away unscathed, and it convinces them that they can never really be consumed by it, so they're willing to walk its very edge. Perhaps the UK having a monarch and a state religion is a visible reminder of what a monstrosity the state can be.

It's funny that both nations have populations that support the massive expansion of the state into human affairs: it's just that the Americans want it to enforce behaviour and the British want it to redistribute wealth.

The most amusing manifestation of the American comfort with violence is the difference in how streakers are handled in British football games and American football games. The former have overweight stewards lumber after naked people, failing repeatedly to apprehend, eventually accompanying them off the field. The latter have large, muscular guards catch up to and violently tackle the streaker to the ground.


> Americans have not really had true authoritarianism in any sense at any point in their history

That is not really true. American history is a constant struggle between authoritarianism and opposition to it.


It was about 10 days after the incident that he was taken into custody. What did the three eyewitnesses say? Did they make positive identifications? Did the officers involved in the initial incident claim he was the same person? Were any of the seven officers the same ones who encountered him in the hotel? How about the hotel staffer? Did they make a positive ID? Was 'Jamal' a hotel guest?

Seems like there should be a discussion about all of this in the article.

Facial recognition as the sole evidence must not be used to make arrests.


Eyewitness identifications are very likely worse than facial recognition. There should be some other evidence besides either of those.


The problem is the police, not the technology.

Blaming the technology excuses these awful, incompetent police who will just screw up in a low tech way.

Once the computer flagged a suspect, a basic investigation should have ruled him out or confirmed him.


Absolutely right. They need to have some skin in the game, or otherwise be held accountable if their actions cause harm.

That can't be solved with a software update.


That part boggled my mind. He had someone who could prove he was not who they were looking for. How the hell can the system just go ahead and charge you for something even though there is perfect evidence stating you are not who they are looking for!?


> The problem is the police not the technology.

It's worth pointing out that the technology isn't capable of what people think it is capable of.


I don't see much explaining of the capabilities of AI in the article.


Pointing out a colossal failing of ML based facial recognition is the point of the article. Without that failure, there is no story here.


Where exactly is the failure? It identified a person who looks similar to the person in the driver's license? It could even actually be the same person, given that the driver's license is assumed to be fake (so the forger could have used a photograph of the suspect).


How many people have been arrested historically with just a plain ol' case of mistaken identity (i.e., they just looked alike)?

Seems like we'd need to know the ambient rate to tell if we're really regressing.


This went on for a year before a judge finally told the prosecutor's office they need actual evidence beyond facial recognition and they subsequently dropped the charges.


I think the shock comes from the perception that facial recognition should have zero false positives. Everyone would agree that would be better.


No, the expectation of false positives should mean that a positive match is not enough to keep somebody locked up for multiple days.


Especially if it’s someone who voluntarily turned himself in in the first place.


They should add this to the "In The News" section of clearview.ai's website.


LOL. I love this idea.


Does Clearview AI train on mugshot datasets specifically?

I'm absolutely NOT advocating for its use at all, but it seems like that might be a good source for well-tagged, organized data for what is turning out to be a very problematic (though apparently allowed) use case.

From a photography standpoint, it seems to follow that identical lighting in the same room where they're taking everyone's mugshots could produce different levels of contrast for facial features across different skin tones, simply as an artifact of cheap lenses and a lack of white-balancing or overall care, really. If persons A and B have different skin tones, then the routine, careless, terrible, one-size-fits-all black-and-white photo by an underpaid government office worker may not accurately capture the contrast in the shadows of both of their facial features to the same degree, no human bias required.

This lack of definition may then further amplify any existing biases in ML training, enforcement, etc. against people whose features aren't as well contrasted in the resultant, awful photos. Perhaps training on such a grainy, washed-out dataset would at least help the ML distinguish smaller variances in contrast to a finer degree, if nothing else. We humans can do it, after all.


> Does Clearview AI train on mugshot datasets specifically?

https://www.theverge.com/2020/2/6/21126063/facebook-clearvie...

* "Facebook and LinkedIn are latest to demand Clearview stop scraping images for facial recognition tech. Twitter and YouTube have also objected."


Perhaps it's just my schadenfreude acting up but I pray to sweet baby Jesus that they try to bust somebody with this. I'd love to see a whole bunch of engineers from Clearview AI get a proper shellacking on the stand from a criminal defense attorney. I don't see how anything from this product could be used as evidence without the algorithms being questioned or exposed in court.


That's not what would happen, though. The police will use this tech on poor people who can't afford their own attorneys, and public defenders are too overworked, underpaid, and/or cynical to try to push back against the system. On the off chance that a rich person gets falsely arrested due to this tech the prosecution will probably immediately settle out of court to avoid scrutiny. This same scenario has played out over the last decade or so with the Stingray device[1] in particular and various forms of warrantless (or essentially warrantless) wiretapping in general. Even if facial recognition tech is officially banned or restricted, it will probably be used secretly and covered up using parallel construction[2].

[1]https://en.wikipedia.org/wiki/Stingray_phone_tracker [2]https://en.wikipedia.org/wiki/Parallel_construction


The facial recognition is likely 'only' used as a pursuit/detection mechanism. Presumably/hopefully when making a case in court they use traditional evidence like eyewitness identification, DNA, etc.

Novel evidence like this is probably unlikely to do well in court for the reasons you describe.


In the specific instance described in the article, the police and prosecutor failed to produce any more evidence than the facial recognition match.

Nevertheless, they imprisoned a man for ten days and continued to press criminal charges over the course of a year. As described, the man had to spend his entire savings on his defense.

The court system worked correctly, in that the judge told the prosecutor to produce more evidence. The charges were dismissed.

So we have the "happy" ending of a man whose life has been turned upside down, whose financial situation has been ruined, and who has had to spend his time and effort defending himself rather than contributing to society in a positive way. All because facial recognition was used as the only piece of evidence.


How many people have these engineers been arresting? This is news to me. I'm shocked!! They don't have the power to arrest people. How is this being allowed?!


Just out of curiosity - if the AI software relies on utilizing images scraped from the web, could one conceivably hit the developer with copyright infringement charges?

Or would this be fair use?


It's illegal. Prior authorization _should_ be required to use someone's copyrighted work. In this case, the AI firm that compiled the profile of this poor soul probably had not reached out and asked for authorization to use his image in a vast database which is then productized and sold to law enforcement agencies.

Now, things get complicated when you involve firms which, in the process of signing up a user, might force an agreement to sell/market the data in a non-exclusive manner to "trusted partners". Sooooo... is it illegal? Depends on the lawyer and how long you can fund a lawsuit for, I suppose.


We do not actually have precedent for this being illegal yet. You are not using the image in the creation of a work (in which case you would need permission), but using it to train an AI model, and we currently have no legal indication that this is not allowed, even if the image is copyrighted.


Could you explain why you don't consider Clearview's product to be a work in this regard? It seems plausible to me that someone could infringe copyright by incorporating copyrighted works into a machine learning model.


I don't have much of an opinion myself on it, I'm just trying to state that so far, this appears to be the status quo. Here is some recent additional information on it compiled by Gwern: https://www.gwern.net/Faces#copyright (scroll up just above this link):

>Models in general are generally considered "transformative works" and the copyright owners of whatever data the model was trained on have no copyright on the model. (The fact that the datasets or inputs are copyrighted is irrelevant, as training on them is universally considered fair use and transformative, similar to artists or search engines; see the further reading.) The model is copyrighted to whomever created it.


The stance from Clearview AI (henceforth known as "privacy rapists") is that anything put publicly is 'fair game' for their use - I don't think that stance has been fully challenged yet.

What I'd be interested to know is if they have pictures of minors in their database, which could theoretically require some sort of release for them to collect/use (which I'm quite sure they wouldn't have).


https://www.theverge.com/2020/2/6/21126063/facebook-clearvie...

https://slate.com/technology/2020/02/youtube-linkedin-and-ot...

* "Twitter sent a cease-and-desist letter to the company ordering it to stop mining the social media platform’s data and delete anything it had already collected."

* "YouTube, and Venmo sent their own cease-and-desist letters"

* "Here’s a LinkedIn spokesperson on Clearview AI: “We are sending a cease & desist letter to Clearview AI. The scraping of member information is not allowed under our terms of service and we take action to protect our members.”"

* "Facebook notably has not sent a formal cease-and-desist letter but claims to have sent other letters to Clearview to request more detail on its practices and then eventually “demanded” that it stop scraping user data. Peter Thiel, a venture capitalist and notable surveillance enthusiast who sits on Facebook’s board of directors, invested $200,000 in Clearview’s first round of funding."


I wish I could say that I was shocked that the victim of bad facial recognition is black. I wish I could say that I'm shocked that authorities take the word of software over looking at the evidence and thinking, "umm, doesn't look like him..." Or perhaps someone did look and said, "meh, they all look the same to me."

All I know is that I'm highly skeptical that this white guy would suffer the same fate, even if I were to have a similar criminal record. Facial recognition seems to be the polygraph for a new century. But unlike a polygraph, it mostly produces false positives for dark-skinned people. While I'm not quite ready to let my cynicism convince me it is an intentional feature, I question how many are demanding a fix.


The last thing police need are more tools that don't work and they don't understand how to use. That they still use polygraphs should make everyone's hair stand on end.


As I understand it, polygraphs are used as a placebo/intimidation tactic to get suspects to spill the beans. As far as I know, the polygraph isn't accepted as evidence in court unless all parties agree to it.

However, the dirtiest thing is to use all kinds of useless evidence and intimidation techniques to scare otherwise innocent people into taking plea deals. This whole guilty-plea saga is a disgrace to the justice system.


Correct. It's widely considered inadmissible as evidence in court. They're also rarely used outside of the U.S. AFAIK.

Recordings of interrogations that feature them also show that they tend to come along with other appeals to ethos ("this is doctor so-and-so...") and hand-wavey forms of intimidation like touting their "seriousness" or "efficacy".

Anything that requires a person to "interpret" its output is likely not very deterministic, and so likely not very reliable for establishing... anything. This is akin to palm reading or graphology.

Unfortunately, revealing to the police in such a situation exactly how foolish they look by getting behind such a device isn't exactly advisable either!


The Penn and Teller series, Bullshit, features an episode on polygraph machines and interviewer techniques. It's a good watch.


> As far as I know the polygraph isn't accepted as evidence in court unless all parties agree to it

That's not the only place where they can be used, and cause damage.


A failed polygraph, while inadmissible in court, will still be used by police during their own investigations to confirm or deny their "gut feeling".


I failed a polygraph. There was a crime at a store, and my name came up when they searched through the ATM records. Police had ample evidence that I was in no way related to the crime, but kept trying to find something to pin on me because of the failed test. Those particular detectives had complete faith in the lie detector as a valid tool of police work.

Video evidence of me entering the store, using the atm, and immediately leaving the store? That was not evidence to them because I failed the polygraph.

I take an exceedingly dim view of any police department that uses a polygraph in any capacity. They've been proven time and time and time again to be worthless as a tool, but police still use them. The rough equivalent is using tarot cards as a tool for solving crimes.


You assume they wanted the truth, when in reality they only want a confession.


There are many things in law enforcement that have either been proven to be worthless, near worthless or never been proven to be valid...

Eyewitness testimony, for example: aside from the fact that people are terrible at recall anyway, and that high stress causes memory problems above and beyond the normal memory issues of the human mind, it is very, very easy to manipulate a victim, either directly or indirectly, into identifying the "target", not necessarily the actual criminal.

Police "drug" dogs are another one; they are more or less a blank check for probable cause to search.

Then there is one people never even question, because it is the foundation of the legal system: the good old fingerprint... It always amazes me that DNA is always probabilistic ("1 in 4 billion people will have this same DNA"), but with fingerprints it is an absolute: match or no match. Call me crazy, but I don't think that position is scientifically valid.


> the dirtiest thing is to use all kind useless evidence and intimidation techniques to scare otherwise innocent people into taking plea deals

They don't even need to do that. They can just lie to you. "Hey, your buddy just totally ratted you out in the other room, you better tell us your side of it if you don't want to take the rap for everything"


Even worse, they can pretend they have evidence that they don't, pretend they believe the evidence, and pretend it's enough to convict you by itself.

They can tell you to just say you were there, and they'll go easy on you. You lie and say you were there, now they have evidence that can actually lock you up, even if you are a random person off the street.


To a degree, but I also think this will change over time as more and more innocent people are found to have been convicted on false statements obtained through coercive lying tactics by the police.

For example, in this case police threatened to arrest the man’s wife if he didn’t confess to killing his child (a new trial was ordered): https://brooklyneagle.com/articles/2014/02/20/police-can-lie...

But they went on to say it was still acceptable to lie to suspects... I imagine this murky area of the law will eventually say that police cannot lie to suspects. The Central Park Five is mentioned in that article and is a blatant example of police lying to coerce testimony and then putting innocent children in prison.


Even worse than all of that, they can use all of those things while threatening you with dozens of charges and decades of imprisonment, then say "but if you act right now and sign this, we will only give you a couple of years and then you will be free".

In any other context these tactics would themselves be illegal. I am a firm believer that you cannot enforce the law by breaking it, and that the police should NEVER have the power to do things the public cannot.



> All I know is that I'm highly skeptical that this white guy would suffer the same fate, even if I were to have a similar criminal record.

It is not clear to me that a prosecutor's office which has demonstrated that it is not interested in justice would have cared about the color of skin in the beginning. They're after low-hanging fruit, and maybe establishing facial recognition as 'proof enough'. A poor, white ex-con who can't afford a lawyer might just take a plea deal. I think the point of possible divergence is when the guy lawyered up.


> I'm not quite ready to allow my cynicism to let me think it is an intentional feature

Hanlon's razor is hinted at in the later discussion, i.e.

"never attribute to malice that which is adequately explained by incompetence" (at least, both the words incompetence and malice appear under discussion)

But, I prefer to apply a different rule-of-thumb:

"incompetence is strongly equivalent to malice"


If you want variety, the iPhone had some trouble with Chinese faces.

https://nypost.com/2017/12/21/chinese-users-claim-iphone-x-f...


Ugh. You would think Apple would do more testing in one of their largest markets, and the place where iPhones are literally built.


> it mostly false positives only for dark-skinned people.

Dunno what the technical reason is, but if there was one, I bet it went something like this:

v1: sales 'engineer' says: "Our FR product works great on light skin but terribly on darker skin so we've largely disabled that" and made zero sales.

On v2 (version2 or vendor2), they just dialed up the acceptable false positive rate, said "ya, it works on everyone, no problem!" and sold sold sold!


Part of the reason is likely that black people, particularly dark skinned black people, are just harder to photograph than people of other ethnicities: https://www.npr.org/sections/codeswitch/2014/04/16/303721251...

Edit: I'm placing this higher up in the thread for more visibility, but, for those interested, here are some tips for photographing people with darker skin tones: https://creativecloud.adobe.com/discover/article/10-tips-for...


> Part of the reason is likely that black people, particularly dark skinned black people, are just harder to photograph than people of other ethnicities

If that were true, that would just mean photography-based systems are currently not good enough to work across any population that includes dark-skinned people. Society should not, and cannot, collectively wring its hands, say "Oh, well, it's really because of the photons", and go ahead and make use of a provably unjust technology.

Thankfully, modern phone cameras can take great high-dynamic-range photos as well as low-light/night pictures; I'm confident in saying that capturing dark skin is a much easier bar to clear (the dynamic range can be much smaller if you only care about a person's face). The fact that good-enough technology is not used (or mandated), and that solution providers and their clients settle on cheap(er) sensors with exposures that only work well with light skin tones... well, that tells a story by itself, doesn't it?


Oh come on! “They” are harder to photograph? This is exactly the same as saying black people are less intelligent because they don’t hold the higher-paid jobs. It’s not by choice. Nor is there any physical reason why that should be.

The technology to photograph is biased to better capture white skinned people.

It even says so in the very article you link to:

“A lot of [the design of film and motion technology] was conceived with the idea of the best representation of white people.”

Improve the technology!


Given that the available equipment is what's available, saying that the equipment is biased versus saying that black people are harder to photograph is a distinction without a difference. I've literally experienced this when I saw how a film photo of one of my friends came out. He isn't super dark-skinned, but he is dark enough that this particular film tinted him green! You can't tell me that was the photographer's fault, can you? Was it my friend's fault? No, of course not, and I never said anything to the contrary.

I also resent your implication that I meant anything other than that. I never claimed it was the fault of black people, or that there was anything any individual black person could do about it. I think you should check your bias here.


You have to frame your statements correctly. It's not they who are hard to photograph. It's the hardware which has a hard time photographing them.

There is a deeply prevailing western misconception that systems that work for white people should also work for everyone else. When it invariably doesn't, it's always the fault of the minority. Funny how that is.


Again, that is a distinction without a difference. A photographer certainly can compensate for skin tone versus how the final result will be rendered, for instance by using lighting, provided the scene is under the photographer's control. (Edit: see my comment further upthread where I pasted a link about this very thing.) But some of those methods have the disadvantage that by color-correcting for skin tone, one automatically distorts the color in other parts of the image.

This is less true in digital photography, given the level of control over post-processing one has. However, the end result is the same: it takes more effort to produce a faithful photo of a black person than of a white person, for reasons completely out of the control of either the photographer or the subject.

That is all that is meant by the statement I made. Indeed, it is literally the entire content of the statement I made. I defy you to quote where I blamed black people, or even implied there was anything any black person could do about this unfortunate reality.

Edit: Incidentally, literally everything I've said here about the difficulties of faithfully photographing people applies to darker skinned people of any ethnicity, such as Indians or Afro-Arab people. None of this is racist in the sense that it implies anyone is lesser than anyone else, just that that's the way current technology works.


pmiller2's intent was clear: he was obviously commenting on limitations of camera technology, not suggesting that black people intend, or choose(?!?), to be hard to photograph. dstick's response is maliciously uncharitable at best.


Thank you, as well, for recognizing what I was saying, and that, while I made statements involving race, there was no malicious racism behind any of it.


> This is exactly the same like saying ..

No it's not.

Shadows are harder to notice on black skin, for example.

And it's important to improve the tech and try to make it work well for everyone.

But then you first need to realize that, by default, the tech is sometimes going to be broken for a group of people.

Otherwise, how can you be aware that there's a problem to solve?


Thank you. I appreciate that you recognize what I wrote here versus what some people apparently want to read into it.

And, just to continue the anecdote where the camera turned my friend green: y'all know it's not like anybody said "Hey, $FRIENDS_NAME, can you just be less black, or something?" In fact, what we did was all laugh together (including our friend, the subject of the photo), joke a little, and, IIRC, take a digital portrait.


That's fine, but the system shouldn't be dialed up to accept more false positive matches, but it sounds like it is being so.


Oh, I agree 100%. I'm just saying that maybe the quality of input to the algorithms leaves something to be desired.

Or, maybe the fact that there are relatively few black people in tech means that the developers of these algorithms don't even notice that there's a bias to correct for, whether it's in the input or not.


My knowledge about the matter is almost certainly out of date, but when I was working on image recognition over a decade ago the process I went through was like this: translate image to grayscale, run an edge detection algorithm, and then look for certain features. Lighting and dark colors had a big effect on the detection quality, and the whole thing wasn't very robust. I can see how skin color could be an issue if there isn't enough contrast in the image to identify features.
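
For anyone curious, here is a rough sketch of that kind of old-school pipeline using OpenCV. It is my own simplification for illustration, not any particular product's code, but it shows where low contrast hurts:

    import cv2

    def extract_features(path):
        # 1. Load and convert to grayscale: all color information is
        #    discarded, so everything now hinges on luminance contrast.
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        # 2. Edge detection: the Canny thresholds assume "typical" contrast;
        #    an underexposed face yields weak gradients and few edges.
        edges = cv2.Canny(gray, 100, 200)
        # 3. Keypoint/descriptor extraction on the edge map; anything lost
        #    to poor lighting at step 1 degrades everything downstream.
        orb = cv2.ORB_create()
        keypoints, descriptors = orb.detectAndCompute(edges, None)
        return keypoints, descriptors

If the source image doesn't capture enough contrast across the face in the first place, no step after the grayscale conversion can recover that information.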


Are we looking at the same photos? They look uncannily alike to me, to the point where I'm left wondering if the forged license actually reuses a photo from Parks. (As a double convict in NJ, I suspect his earlier mugshots would not be hard for the forger to get. Certainly no harder than forging a license.)


> They look uncannily alike to me

Uncannily? The images aren't exactly high-res, but those are definitely two different guys. The first two photos (the color one at the top and the b/w mugshot pair) are the same guy, but it should hopefully be obvious that the guy in the driver's license photo is someone else. Especially to the police, since we're comparing low-res scans while the feds may have had a larger copy in hand when looking at the guy they arrested.

You may just be looking at the wrong photos from the article - in which case yes, that guy looks uncannily like himself!

The subtleties of the human face that change from person to person are things like cheekbone position, jawline, head shape, eye shape/size/placement, etc. They're what enable someone to "recognize" you in a human-to-human context, sometimes even with a facemask on. They're part of how you distinguish yourself in the mirror from other humans also.

However, if a significant portion of one race is indistinguishable to a given observer, they haven't done enough observation. Certainly not enough to arrest someone based on their "observations". A relative inability to identify facial subtleties in members of other races is a known phenomenon and has been studied, incidentally making it the perfect candidate for law enforcement training. Seems that wasn't the case here in TFA, unfortunately.

For reference, I am a different race from these guys, and my SO is a different one still, but I don't think it takes a particularly keen observer to distinguish between the two mens' faces.


Correct. It’s quite clear these are different guys even with the low-res images. So maybe OP was comparing the wrong photos.


Simply put, having a suspect who looks like someone who committed a crime should never be enough to make an arrest. People look like other people. There has to be more to suspect someone than just what their face looks like.

If the police make an arrest based on facial recognition alone they have essentially given up doing police work and are just following suggestions that a computer makes. If that's what society chooses to use for justice then fire all the police officers and employ some minimum wage workers to do the job instead.


Indeed, let me present the case of William West and William West: http://dh.dickinson.edu/digitalmuseum/exhibit-artifact/babes...

Those two men truly do look a lot alike, unlike the two men referenced in the article. Even so, there's no way looking like someone should imply anything at all. After all, I bear somewhat of a resemblance to a "moderately well known, but only a real fan would recognize him on the street"-level actor, and I'd really hate to get picked up if he decided to go off and do some cocaine or something.


"Uncannily"? I don't see it. Different mouth proportions compared to the nose, different cheekbones, different eyebrow lengths...


It would help if the article put them next to each other. Then, yeah, I think it's definitely apparent they are different people: https://imgur.com/a/zJJNSFG

I would hope police are better at this sort of thing than me.

Also, I would hope better-quality images would be used; these are terrible quality.


If you look at the facial structure, they're completely different people. I don't understand how somebody would mix them up...


Nearly every feature of the face is different. Is there someone who works in AI imaging who can answer: 1. How could this happen? 2. Why has it gone unsolved? 3. Why is this kind of tech even attractive?


Yeah. For one thing, the actual suspect's face is much wider than the man who was falsely accused. And that's just at a glance. I'm sure there are multiple other points of difference.


Yeah, even with the low-res images it’s clear these guys aren’t the same. They’re not even the same complexion, with the guy in the license photo being several shades darker. (The darkness isn’t due to the low quality of the image.)

Are you sure you’re not comparing the wrong photos? I’m confident that if you saw these two men stand side by side, you would not say they look uncannily alike, unless you’re visually impaired.

I agree with someone else’s comments about the importance of observation. My brother and I worked at the same startup. I was full-time. He was an intern. After several weeks, another intern was somehow confused and thought my brother and I were the exact same person. I mean, sure, siblings do look alike. But somehow she didn’t notice that one of us has light brown skin and the other has dark brown skin, one of us is a whole 3 inches shorter, one of us is a whole 40 pounds heavier, one of us wore glasses and the other didn’t, one of us has a full head of hair while the other is balding in several spots with a receding hairline, etc. You already see where I’m going with this. Yet for 3 whole weeks, she came to work M-F and thought we were the same person? It was an open office environment and we all sat in one large room.

<INSERT REASONABLE STATEMENT ABOUT TIMNIT GEBRU AND THE IMPORTANCE OF ETHICAL AI BUT GET DOWNVOTED TO OBLIVION>


[flagged]


Hey, I understand how activating internet threads can be, and I get from your comment upthread (https://news.ycombinator.com/item?id=25564255) that you have a personal connection to this topic which most commenters don't. I respect that a lot, and the fact that you were willing to share some of your experience in your GP comment was a really positive contribution. I know that's not easy, and you're under no obligation to do it, but FWIW, we badly need more of that.

Please don't post like this comment here, though. It only hurts and makes things even worse.

The thing that helps is to share some of what you know and some of your experience, so others can learn. A small minority of commenters is going to react badly, repeat stock objections, downvote etc., and respond with flamebait. But most of those comments will eventually get downvoted, flagged, and/or moderated. That's the best we can hope for on a public forum where anyone can easily create an account.

More importantly, the rest of us are genuinely interested. If you would please follow the site guidelines by ignoring the flamebait (other than to flag or downvote it, and email us at hn@ycombinator.com in egregious cases), and by sharing information with curious readers when you're so inclined, we'd be grateful.

https://news.ycombinator.com/newsguidelines.html


Thanks, dang@. Although I wish that you and others affiliated with HN would just explicitly call out the racism and sexism that exists on this site, instead of telling me I’m just making things worse. I don’t mind doing the work for you. But I already know your team won’t. PG wouldn’t allow it anyway...

However, I don’t agree with you that it’s a small minority doing these things. I’ve been reading (and commenting) on HN daily since 2010 (under different accounts), so it’s very difficult for me to accept what you’re saying as fact.

Also, did you see my other comment about how I feel about the guidelines lol? (It might have been on a different article.) No offense, but I’ve seen your replies on other articles where you ask people to follow the guidelines and my immediate response is always something like “But why? What this person said makes sense.” For example, the whole “don’t tell people they didn’t read the article” rule...

I don’t mind being flamed by the “small minority” of racist and sexist commenters here. I’m used to it. I’m also used to people expecting me to “behave” myself.

Anyway, based on facts, Timnit Gebru is right. Jeff Dean lied (possibly unintentionally) through omission. Megan Kochalia should be fired.


Thanks for this. I didn't see your other comment about the guidelines. If you have a link to it, I'd be interested to read it.

I'm not asking you to "behave yourself". I don't look at your comments that way at all. I'm asking you to treat HN as a community that you're part of.

I think you're underestimating the good faith in readers here. It is an unfortunate consequence of HN's non-siloed structure (i.e. there's no filtering or selection of who one's subscribed to—just one big room) that bad actors, even when they are a small minority, create an overwhelming and shocking experience of the community being that way. It is not objectively true. Actually the opposite is true: this community is, for a place that includes millions of people, actually relatively functional. (Note that word 'relatively'. Obviously it's still dysfunctional. The question is how do we nudge it into less dysfunction.) I've written about this here if anyone cares: https://news.ycombinator.com/item?id=23308098.

Re making things worse: there are, unfortunately, many ways to make HN worse. I'm not saying you're doing all of them, just one of them. You posted https://news.ycombinator.com/item?id=25565642 5 times—that's obviously not cool, regardless of how badly you were provoked. When I say that, it is in no way a defense of bad actors. I'm just trying to prevent this place from degenerating deeper into hell, or at least to slow hell down a bit.


I wasn’t provoked though. I literally asked to be downvoted in the GP and then when I was (as expected because of course HN is full of more racist and sexist people than you’re willing to admit) I copied/pasted the same comment and changed numbers while waiting for my builds to finish. It’s not as serious as you keep alluding to (i.e. “I understand Internet posts can be activating.”)

So like I said, I don’t mind calling out racist and sexist people on HN. You should try it some time. I already told you what you could do to nudge this community in the right direction. But again, I already said PG wouldn’t allow this.

Is there any data to back up your claim about this “small minority”? If I run a sentiment analysis on the most “charged” posts, will the analysis indicate that there’s a “small minority” of racist and sexist people on HN?

Sure, no one’s going to be racist or sexist on the millionth hate post about Electron or AMP. I understand that. Is that all you’re saying?

Have you seen how toxic the Blind app community is? There’s an overlap between that community and this one. Maybe it’s just the exact same “small minority” everywhere... (Sarcasm)

FWIW, to be clear, it’s not that I lack faith in only HN readers. My ideas/truths about the world extend past this community. HN comments are just a symptom of something much larger: a diseased system being upheld by people who just want to see the good in everyone and vomit rainbows.


Ok. I was trying to give you the benefit of the doubt, but if you were just casually trolling the thread rather than feeling hurt by something that came up, that's worse, and the kind of thing we will ban an account for if it happens again.

You're completely welcome here as long as you want to participate as a community member and use the site in the intended spirit—which, by the way, has nothing to do with "behaving oneself", "just seeing the good in everyone", or "vomiting rainbows". It has to do simply with wanting an online discussion forum that's actually interesting, that has some value for curiosity, as opposed to melting down into a circle of hell. Comments like https://news.ycombinator.com/item?id=25565642 are not interesting and encourage worse from others. This is the way that internet communities poison themselves. We're trying to stave that off, at least for a while (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). The internet call-out culture where people shame and abuse each other is not sustainable here.

I haven't seen Blind, only a few mentions here and there. When you say "there's an overlap between that community and this one", you're assuming your conclusion. I don't see any reason to believe there's much overlap.

If you want your ideas about how to "nudge this community in the right direction" to be taken seriously, one thing you shouldn't be doing is posting in a way that is utterly destructive of the community. That can't possibly be the right direction. Places like this are fragile and we need to work together to prevent this one from becoming hell. Scorched earth is not interesting, and if HN turns into that, what good does it do anyone, or the world? It's bad enough that racism, sexism, and other bad things appear here—indeed it's awful, and it's also inevitable, because anything that exists in society is going to show up in a large, open internet forum like HN. If you think we don't care about that, you couldn't be more wrong. But that doesn't make it good to burn the commons to the ground, which is unfortunately what most of your recent posts point towards.

I'm not sure why you bring up PG, but he hasn't had anything to do with operating HN for many years.


Dude, like I said I’ve been commenting on HN under different accounts for years. Don’t judge me by this one interaction... Keep defending the racists and sexists.

You’re also disregarding the fact that I left several non-trollish comments on that post. It’s ok to call people out.


I also feel like your own reply to me shows that not even you can rise above it. I’ve never seen you leave comments like this before. I’ve definitely seen quite a few people who deserve it. I’m disappointed.

You’re just like the rest of them.


There are some similarities, but if you put them side by side, you'll note many differences.

https://imgur.com/a/L9GDD0m


I mean... They're both black guys with facial hair and short haircuts. Beyond that, not really... Face shape is completely different, eyes and nose are very obviously different. I don't see how the two could be confused.


Yeah, but all of us in this thread already know that they're different people, so we're primed to come to that conclusion. With no prior info, short glances, and casual checks, however, they absolutely could be considered to "look the same".

Anywho, this is a prime example of why you need humans who aren't involved in a situation to be in the loop to augment the facial-recognition algorithms. Enable them to view clean, high-res, zoomable versions of the source images and then have them make the final call. Heck, if you're worried about bias, assign the task to a panel of people from various ethnicities, or maybe just to people of the same ethnicity as the person involved.


I disagree with your premise that they "look the same" even at a casual check, but even if we assume that's true, we're talking about 10 days here. I think the "casual check" excuse expires around the 10 minute mark.


Agreed - I think their plausible excuse expires once they get a look at the person and compare him against whatever "documents" or images the facial-recognition system matched him to.

Btw, I'm not saying they look the same. I'm saying at a casual glance or quick look, or grainy-surveillance footage level, they do look strikingly similar.

Another factor we may not be considering in general regarding facial recognition is identity theft. I've dealt with that sort of thing before, and if you're not careful the government can get really confused as to who the "actual" person is and who stole the identity. Again, that's not the case here, but it is something police need to consider.


Would you like training on this matter?

I've learned that each phenotype has to be weighed differently from those of the people you grew up around the most. Completely different phenotypes and patterns, oftentimes.


The cheekbones are different. The lips are different. It's a crappy photo, but take a look.


They do look similar (enough to become a potential suspect, though I don't think that should be enough for a warrant), and given that police departments don't "protect" their mugshots, I wouldn't be surprised if the picture was indeed reused (or photoshopped).

But I really wonder whether the same officers that dealt with the suspect also questioned the innocent victim in this case. It seems like this wasn't the case (gross negligence on their part); those people should have been able to tell them apart instead of relying solely on the picture they got from the license. I think this is what the judge wanted instead of just the facial recognition "evidence" (which eventually led to the charges being dropped): something like, oh I don't know, the fingerprints from the abandoned vehicle? Basic investigative work, perhaps?


Eh, I wouldn't even say that. I'd say they look similar enough to be put into the same photo lineup, but I wouldn't even drag the guy from the article in to do a physical lineup, much less arrest him.


Agree that a photo alone shouldn’t be enough for an arrest warrant.

Like I said - either the officers that dealt with the suspect didn’t deal with/look at the victim, or else they’re blind (or trying to meet a quota).


I would hope it's unlikely these officers would suffer from a similar bias, but "they all look alike" is true in the limited sense that people who don't have much experience with particular ethnic groups do find individuals in those groups harder to distinguish than groups they do have experience with: https://www.psychologytoday.com/us/blog/life-in-the-intersec...


When will the day come when everybody understands that these algorithms are statistical and quite often wrong? It always surprises me when I talk to somebody who does not understand neural nets or the current state of ML algorithms and I have to explain that it is just statistics, and that you can never expect these systems to be 100% correct.


You might be surprised if you ever worked in B2B or B2C sales. It's not uncommon to apply a "spin" that effectively sets unreasonable expectations for your client, whether they be an average joe or a law enforcement agency.

Not ethical, but also not exactly uncommon.


And most people don’t understand Bayes’ rule and the impact of false positives.
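
A quick back-of-the-envelope illustration of that (all numbers here are hypothetical, chosen only to show the base-rate effect): even a seemingly excellent matcher produces mostly false positives when it searches a huge gallery for one person.

    # Hypothetical numbers -- a sketch of the base-rate problem,
    # not any real system's accuracy.
    database_size = 10_000_000      # faces searched against
    true_matches = 1                # the actual perpetrator, if present
    false_positive_rate = 1e-6      # one in a million per comparison

    expected_false_positives = database_size * false_positive_rate  # 10.0
    # P(flagged person is the perpetrator), i.e. Bayes' rule with a
    # uniform prior over the gallery:
    posterior = true_matches / (true_matches + expected_false_positives)
    print(f"{posterior:.0%}")       # ~9% -- the "match" is usually wrong

In other words, even with a one-in-a-million false positive rate you expect about ten innocent lookalikes per search, and that is before the perpetrator's photo is even guaranteed to be in the database.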


The problem is that expectations get set by other industries, where correctness actually matters, both ethically and in terms of legal liability.

The corresponding XKCD comic: https://xkcd.com/2030/


In Shenzhen, starting in the last couple years if you jaywalk your face is automatically recognized and the fine is automatically deducted from your bank account. It's extremely accurate due to full access to corroborating location/transit/payment data. This happens within 20 seconds, and even includes expats. Your face was permanently indexed during the passport check.

Such an automated system at first glance may seem like it reduces on-the-ground human flaws and bias, but it actually enables far deeper corruption. Those with administrative control would be able to selectively highlight or concoct any sort of offense to undermine their personal or political opponents.

"Show me the man and I'll show you the crime" - Lavrentiy Beria, Stalin's Chief of Secret Police


FWIW I live close to Shenzhen and have a company there, and I have never heard of this system actually being applied except through media announcements. I am skeptical it is widely deployed, as jaywalking is rife here and in nearby cities; in fact, I believe it is probably largely a proactive media announcement from a solutions provider to local police, claiming better capabilities than they really have. Of course it will happen eventually, but in urban areas globally we already carry cellphones and drive cars with number plates, so privacy left the arena some time ago...

Maybe in the short term we will see a push-back of all-weather snow-busting bicycle-riding mesh networkers in Berlin or something, but I think the global trajectory is clear. What we really need is an alternative to commercial cellular.


At 5:30 in this Bloomberg video there's a discussion between expats, some talking about how they were hit by the new system.

https://www.youtube.com/watch?v=ydPqKhgh9Mg


I'm not doubting a system is deployed and can function sometimes, I'm doubting it's pervasive and generally effective even in Shenzhen. NB. Nanshan district where the guy mentions being done for jaywalking is the headquarters of Tencent.


True - such surveillance is still costly, and not yet omnipresent. Similar to speed cameras at certain intersections in the US. The hope is that it remains just that, an occasional feature that inspires deterrence, and does not escalate into omniscience.


I don't think it's costly. HikVision is one town away in Dongguan, and their foreign market just evaporated due to US sanctions. Cameras here are dirt cheap, and many of them now come paired with ICs that provide 'AI' as standard. The only things different in this case are the overall resolution needed to cover an entire intersection (probably done with multiple cameras) and the face feature extraction code. Example Huawei product: https://detail.tmall.com/item.htm?spm=id=618652543962 (1080p, SD card logging, and 'AI', probably event detection/trigger zones) for USD $25 ordered at a volume of one.


I can confirm: I was in Zhongshan last year and saw a very big screen in the middle of a crosswalk island that "named and shamed" jaywalkers, with pictures of the jaywalking and the face from their Chinese ID as well, although the eyes were blurred out. The Chinese name had one or two characters replaced with an X. Scary stuff.


Hot take: Within 10 years, I think there's going to be similar tech with personal cars, where you'll get automatically fined whenever you drive too fast, drive without seat belt, etc.

Then this information will be shared with insurance providers, and your car or health/life insurance will be automatically adjusted accordingly.

And, because why not, the dispute process will probably be some Kafkaesque ordeal.


If existing injustices are any indication, I predict the following:

In democracies that will likely result in speed limits and traffic laws that reflect how people actually drive, or in penalties and fines reduced to the point where a large enough fraction of the population can pay to play. And then half of HN will hand-wring about how we have an 85mph speed limit on the interstate and won't somebody think of the children.

There will be a few rich suburbs (the kind of places where the electorate clutches their pearls at the idea of anyone breaking any law, even though they all text and drive in their Land Rovers and pay their babysitters under the table) that figure out they can just do it to outsiders only and create a revenue stream. They'll have a good 5-10 year run in the time it takes the courts to smack them down.


My country, unfortunately, already has this, though it's not fully automated. Cameras take pictures whenever they detect someone running a red light. The license plate is read off, and a ticket is mailed to the car owner's house.

I think someone still looks at the pictures right now for 5 seconds before sending it off, but that won't last for very long.


They already exist for insurance. Many ads claim to reduce your insurance bill by proving you drive safely. They work by siphoning all the data off the OBD2 port and handing it to the insurance companies.
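
For the curious, here is roughly what pulling that data looks like with the open-source python-OBD library. This is a sketch only; exactly which values a given insurer's dongle logs is my assumption.

    import obd  # pip install obd; requires an OBD-II adapter plugged in

    connection = obd.OBD()  # auto-detects the adapter's serial port

    # A few of the live values any OBD-II device can poll:
    speed = connection.query(obd.commands.SPEED)            # vehicle speed
    rpm = connection.query(obd.commands.RPM)                # engine RPM
    throttle = connection.query(obd.commands.THROTTLE_POS)  # throttle %

    print(speed.value, rpm.value, throttle.value)
    # Sampled every second and timestamped, this is enough to infer hard
    # braking, rapid acceleration, and habitual speeding.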


Teslas already have a user-facing camera. They are using them to make sure that if there's an accident, they can pin the blame on the driver (perhaps not looking straight forward, etc), and not their own technology.


Does this work with sunglasses and facemasks?


Facial recognition can work with ordinary facemasks. I don't have a link handy, but I'm sure Google can produce one, given the timeliness of the question.

As to your question, I also wonder about such things as:

* Makeup: https://www.vogue.com/article/anti-surveillance-makeup-cv-da...

* Hair styles: https://cvdazzle.com/

* Clothes: https://www.businessinsider.com/clothes-accessories-that-out...

* Infrared LEDs: https://www.schneier.com/blog/archives/2018/03/fooling_face_...

* Image manipulation software: https://www.theverge.com/2020/8/4/21353810/facial-recognitio...

And other things.


> Facial recognition can work with ordinary facemasks

In theory. In practice, it's unlikely to be very reliable outside of controlled situations like passport checks. https://apex.aero/articles/facial-recognition-tech-works-mas...

inb4 'gait recognition is foolproof'.


The first several links all say they make a big difference. Face masks are already known to stop the spread of coronavirus. Apparently, they can also make it much harder for facial-recognition software to identify you. (cnn)

It's true that accurate identification in ideal conditions is still possible, but catching random jaywalkers is hardly ideal conditions.


Right, but the newest of the first 4 links are still 6 months old. Six months is an eternity in the world of technology, ya know?

How much is 6 months converted into units of "new JS frameworks," BTW? ;)


> Six months is an eternity in the world of technology, ya know?

That's not even true in research, let alone production-ready deployment at scale.

Just because there might be a paper out that pushes SOTA by a few percentage points, doesn't mean there's a halfway usable product behind it as well.


Facial recognition is one of those "the little things add up" type deals. The tech is still not on par with humans, and even humans can't reliably recognize each other without context clues (unless it's, like, your sibling) when it's someone they aren't expecting, the person is wearing a mask or has a beard, and the lighting is crappy.


Are the creators of these algorithms penalized for their false positives? I think the confidence levels returned by these algorithms would be quickly calibrated if there were some strict penalty for high-confidence false positives.


I would prefer a strict penalty for people who rely so much on their tool that they can't even do their jobs correctly. I'd fire a carpenter if an AI told the carpenter to fuck up my cabinet, because I expect my carpenter to double-check. Similarly, if the carpenter finds the AI consistently useless and an avenue to get their ass sued, then they'll decline to purchase shit AI.


I agree. The penalty on the user of the AI would in turn incentivize the user to penalize/reward the AI developer. But I'm not sure it's reasonable to require every user to be able to assess the long-run accuracy of a statistical system given their limited interaction with it. Maybe there's a market for companies that reliably assess and certify AI systems, or for a sort of prediction market that penalizes overconfident false positives.
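
One well-understood mechanism along these lines already exists: proper scoring rules. Log loss, for example, punishes a confident wrong answer far more than a hesitant one. A minimal sketch (the scenario and numbers are hypothetical):

    import math

    def log_loss(predicted_prob, actual):
        """Negative log-likelihood of the true outcome; lower is better."""
        p = predicted_prob if actual == 1 else 1 - predicted_prob
        return -math.log(p)

    # A cautious system, 60% confident in a match that turns out wrong:
    print(log_loss(0.60, actual=0))   # ~0.92
    # An overconfident system, 99.9% confident and also wrong:
    print(log_loss(0.999, actual=0))  # ~6.91 -- penalty grows without bound

Billing or certifying vendors against a rule like this would make overconfident false positives directly expensive.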


Why should they be?

This system is saying, "of the 10 million people we have in our database, this one looks like the photo you gave us."

It is the police and prosecutor who are asserting, "this person did it and should be held in jail."

This tool is the visual equivalent of looking up someone's name; treating its output as proof of identity is like using the no-fly list as though no two people share the same name.
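
To make that concrete: a 1:N face search is essentially a nearest-neighbor lookup over embedding vectors. Here's a toy numpy sketch (random data, with a 10k gallery standing in for 10 million). Note that something is always the "best match", even when the real person isn't in the gallery at all:

    import numpy as np

    rng = np.random.default_rng(0)
    # 10,000 unit-norm 128-dim embeddings standing in for a 10M-face gallery.
    gallery = rng.normal(size=(10_000, 128)).astype(np.float32)
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

    query = rng.normal(size=128).astype(np.float32)
    query /= np.linalg.norm(query)

    scores = gallery @ query              # cosine similarity (unit vectors)
    top5 = np.argsort(scores)[::-1][:5]   # indices of the 5 most similar
    for i in top5:
        print(i, round(float(scores[i]), 3))
    # Output: the five "most similar" faces and their scores -- a ranking,
    # never an assertion of identity.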


That might be what it actually does, but if they are marketing their product as "facial recognition", then that is not what they are claiming it does.


What happens if this went to trial (let's say there was enough other evidence)?

How do you question your accuser?

Do you have access to the software? How much access?

Do many people even have the resources to examine the software?


"The software, which was created by Clearview Al, was criticized for its heavy reliance on billions of social media photos to identify criminal suspects."

So how exactly did they get access to all photos? It must have been trained on public networks like instagram, not facebook, right?!? If it was FB that would be interesting.


https://www.theverge.com/2020/2/6/21126063/facebook-clearvie...

> "Facebook and LinkedIn are latest to demand Clearview stop scraping images for facial recognition tech. Twitter and YouTube have also objected."

https://slate.com/technology/2020/02/youtube-linkedin-and-ot...

> "Facebook notably has not sent a formal cease-and-desist letter but claims to have sent other letters to Clearview to request more detail on its practices and then eventually “demanded” that it stop scraping user data. Peter Thiel, a venture capitalist and notable surveillance enthusiast who sits on Facebook’s board of directors, invested $200,000 in Clearview’s first round of funding."


>...notified Woodbridge police they had a “high profile” match to the photo,...

That's weird. You would think a facial recognition system would return hundreds of matches in a case where it was searching through a whole lot of images. What does "high profile" mean in this case?


Has any human reviewed the match before throwing the person in jail? I hope they did, and in that case it is clearly a human error.

Of course, no AI system (facial recognition or not) should be a sole decision maker for even detaining anyone.


What judge signed the arrest warrant, and why are they still a judge?


Is there a non-paywalled version of this anywhere, or a way of finding one? Really, it's tiresome to see constant posting of content that most others can't even read.



It may be a jurisdictional thing or a lack of ad blocking? I didn't see a paywall here in the U.S. with a generic ad blocker on my phone, FWIW.


> Outside, the man jumped into a black Dodge Challenger as officers drew their firearms and told him to stop

Not sure how this was justified for shoplifting.


My dad was a biker. He led a clean and sober biker club, but you can't really tell the difference between that and a gang unless you know the symbols ("colors") on the jackets.

When I was maybe 11, we were in eastern Washington, I was on the back of the bike, and we got pulled over for failing to signal when pulling out of a gas station. Which I don't think is a violation? And also we did signal. But in any case, the cop approached the bike with gun drawn and pointed at us.

It's not about the suspected violation. These are cops willing to aggressively threaten lethal force when they are experiencing fear/anxiety/uncertainty based on what the suspect looks like, be it their skin color or their clothing.


> These are cops willing to aggressively threaten lethal force when they are experiencing fear/anxiety/uncertainty based on what the suspect looks like, be it their skin color or their clothing.

There are also a fair number of bully cowboys out there hiding behind the badge.

I was a teenage passenger in a car when a friend of mine held up his wallet in the light of the back window to see if he had his bank card or w/e. Cop car was behind us in traffic, pulls us over, and two cops get out guns drawn instructing everyone to get face down on the pavement.

Turns out they thought they saw a knife/gun when my friend held his wallet up in the light, and they figured we were brandishing a weapon at them. We all got searched, and when they found half of us Boy Scouts were carrying pocket knives, things got really stupid really fast. We had to beg them to be rational with us, nose down on the pavement. Still makes my blood boil.

This is anecdotal, from rural small-town Iowa.


It's no wonder there's so much call to "defund the police" in the US, with the number of anecdotes like these that come out.

It feels to me that the only solution is to completely shut it down.


Similarly, some exchange student classmates in college committed a road violation, either speeding or a moving violation, I don't recall. In any event, they didn't know the protocol for pulling over in their rented car.

The CHP called for backup, forced them over, and drew their weapons on them.

Same for friends at a university. High and in possession, they tried ditching the cops; bad move. The cops caught them and drew their weapons, extremely infuriated at them.


Police across the US are trained to default to the threat of violence at all times, with the expectation that if they don't someone will immediately try to kill them.

A couple of examples:

https://www.theatlantic.com/national/archive/2014/12/police-...

https://www.startribune.com/fear-based-training-for-police-o...

https://www.minnpost.com/community-voices/2020/06/warrior-or...


Just shoplifting is not the full story, although I agree it's still overkill.

> the suspect shoplifted candy and other snacks from the hotel gift shop and darted into a men’s room after a hotel staffer called police.

> When two officers arrived, the man handed over a driver’s license from Tennessee identifying him as Jamal Owens.

> But police ran the name through the computer in the patrol car and determined the license was not valid and possibly fraudulent, according to the report.

> When the police pulled out their handcuffs to arrest the man, he fled out the back door of the hotel, losing a sneaker on the way.

The police outside have no idea about the threat level, it seems; for them it's just someone running from the law, a possibly dangerous person.

That the situation even escalated to this point over some shoplifting and an invalid license, though, is quite telling of the public's feeling toward and fear of the police.


> When the police pulled out their handcuffs to arrest the man, he fled out the back door of the hotel, losing a sneaker on the way. Outside, the man jumped into a black Dodge Challenger as officers drew their firearms and told him to stop, but he took off, causing an officer to jump out of the way, the report states. He crashed into the back of a patrol car and a column outside the hotel before speeding out onto Route 9, an officer wrote.

Not sure how you characterize that as "shoplifting". They had already questioned him about the shoplifting, without pulling their guns. It was the escape, driving at the officers, and ramming that got the guns pulled.


The account you quoted describes the police pulling out their guns before he got into the car at all. Evading arrest is not a capital offense and they don't mention that he had a weapon or that they believe he intended to harm anyone, so it's hard to see how pulling their weapons is justified.

It doesn't make sense to use the fact that he hit some stuff while fleeing from the men with guns to justify drawing the guns. In fact, provoking panicked flight seems like a pretty foreseeable outcome of drawing a gun on someone. If I had a gun drawn on me I might very well flee in a panic, potentially hitting cars with mine. They drew the guns before he did any of that and it's reasonable to think them drawing them made those outcomes more likely.

Edit: Here are the NJ deadly use of force standards [1]. My reading of the standards would not allow the use of deadly force in this situation. The rules also state that officers should only unholster their weapons when "circumstances create a reasonable belief that display of a firearm as an element of constructive authority helps establish or maintain control in a potentially dangerous situation." It's hard to see what was "potentially dangerous" in this situation before the guns were drawn.

[1] https://www.nj.gov/oag/dcj/agguide/useofforce2001.pdf


Correct. Right now the law supports the police rationalizing any outcome from a quantum catalogue of options; if the choice they take is within that catalogue, it is considered "justified".

People then use the mere word "justified" to decide whether it matches their predisposition. Most of that comes from an appeal to authority: if an organization they respect (the district attorney, mayor, etc.) confirms that it was justified, then they feel comfortable in their worldview, despite not really knowing that the legal term "justified" is not universal or predictable, and just comes down to an officer choosing from a catalogue of available options, instead of being judged on which option they chose like a civilian would be.


I think the story is missing some context on this (which I tried to point out earlier, but got downvoted for my troubles) - it does seem excessive if the suspect hadn't run yet, but could be justified if he was driving towards them with the vehicle after failing to follow the officers' instructions.


I agree that it would make more sense if they drew their guns in response to the suspect driving at them. I also think that, if that had happened, they would have emphasized that in their account of what happened.


Agreed - I think the story's recounting of the events might be missing some (a lot?) of the details - it looks like the author had only the police reports to go on, since no one else wanted to provide comments for the story.


Definitely not justified in my country.


"Software problem, nothing we could do."

Every time someone fucks up using technology and no one is held accountable, that's what we hear.


So if you are wondering why Clearview is so bad specifically for black people, it might help to know that the company has extensive ties to right-wing white supremacists, and that many of those relationships were formed in Peter Thiel's orbit.

https://artificialintelligence-news.com/2020/04/08/clearview...


How would that help?


Could you clarify your question?

As I read it, it is essentially "I don't see how a close association with white supremacists/racists is relevant to the fact that it mis-identifies black people as criminals", which I doubt is what you mean.



