Every point against facial recognition on the list is a point about how facial recognition systems materially relate to the individual or about particular technical faults of the system.
All of which miss the point. The point of surveillance is the same as Bentham's original panopticon, that is to say discipline people by making them discipline themselves.
Surveillance isn't scary because of the literal cameras; it's scary because it makes people aware that they're being watched, and thus forces them to police themselves, internally.
People would be better off recognizing and arguing this very fundamental point about the psychological intent of surveillance, rather than having obscure discussions about the legality of consent or whether the system is 97% or 98% accurate or whatever.
> The point of surveillance is the same as Bentham's original panopticon, that is to say discipline people by making them discipline themselves.
If you want your opinion to change things, I recommend not going into the intent.
I work in the public sector in Denmark, and we’ve increased our surveillance as much as everyone else. Often it happens after someone commits a crime. When a citizen assaulted one of our desk clerks, surveillance was stepped up to increase employee comfort. At no point during any of the pro-con discussions did anyone intentionally discuss or express any intent in terms of wanting to make our visiting citizens self-regulate. In fact the very opposite happened, as it was brought up as a major concern, which we dealt with by hiding the additional cameras from view.
I’ve been around the top decision makers for long enough to know exactly what would happen if they happened to read something along the lines of your thoughts. They would easily dismiss it, because they in fact had the exact opposite intent.
If you want to make them listen, you need to focus much more on the result, which is exactly as you outline it: people start to self-regulate, and this psychological response to being watched has negative consequences. That is the only way you’ll get your message out without people dismissing you as a conspiracy theorist or fear monger.
> At no point during any of the pro-con discussions did anyone intentionally discuss or express any intent in terms of wanting to make our visiting citizens self-regulate.
Are you sure about that? I’m sure nobody said it out loud, but I have a hard time believing these people would rather have endured another assault just as long as the perpetrator was caught on camera. It seems more logical that the intention was to prevent another assault.
> In fact the very opposite happened, as it was brought up as a major concern, which we dealt with by hiding the additional cameras from view.
Which is a core principle of the panopticon. If you normalise the practice of hidden surveillance cameras people will have no choice but to self-regulate all the time, not just when they can see the actual camera.
> Are you sure about that? I’m sure nobody said it out loud, but I have a hard time believing these people would rather have endured another assault just as long as the perpetrator was caught on camera. It seems more logical that the intention was to prevent another assault.
You have to consider how these decisions are made in a large political organisation.
In my specific example the people making the decision never actually spoke to the employees on the floor; they just assumed more surveillance would signal that they were responsive and took the issue of safety seriously. Along with cameras they improved alert procedures, so that now all our janitors are summoned when the alarm is pressed. (This may sound odd in countries where citizens have guns, but it’s not that weird here.)
I can’t claim that no one wants to impact citizen behaviour through things like surveillance, but I’ve never experienced any sort of evil motivation in any top-level decisions, even when the results turned out to be evil.
Most people genuinely aren’t evil, especially not people who have been successful enough to reach positions of actual power. They make bad decisions, but they tend to do so while trying to do good. Sometimes money is involved, making the area a little more grey, but you’re really not going to find a lot of Hollywood CIA bosses gone rogue or supervillains in the real world.
If you want to make people listen, it’s important to understand that. If it’s easier for you to keep thinking of these people as evil, then there is an old punk quote that goes: “the guilty don’t feel guilty, they learn not to,” and I don’t say this jokingly. If you don’t accept that the pro-surveillance people actually mean well, even if they are very wrong, you’re never going to get through to them. What is worse, you’re not going to get through to anyone who isn’t already on your side either, because your points will be dismissed the moment you claim that someone is installing cameras to intentionally alter psychological behaviour.
You have to keep in mind that decision makers work together, and while they may appear very much opposed to one another in public, they’ll also spend long hours working together to come up with compromises behind closed doors. So even the anti-surveillance decision makers know that the pro-surveillance decision makers mean well, because they’ve spent hours upon hours discussing it in private.
I hear you, but I don’t think even Jeremy Bentham himself was evil. I’m not sure surveillance is inherently evil either. Just that it may not always be as good of a deal for society as it seems.
But the material reality is still that the mechanism of action of a surveillance camera is to induce self-policing. Without that effect it becomes worthless, whether your intentions for putting it up are good or not.
Or maybe they were shopping for force field projectors and ordered cameras by mistake.
I really appreciate the sentiment you're conveying here, and you put it well. A quick browse of your profile suggests this isn't an anomaly in your posts, either.
Err, having people practice some self-discipline and personal responsibility would be great. Nice selling point of surveillance heh.
Imo, it is scary because the state always knows where you are and what you do. Which is fine most of the time, but if they decide you owe them a favour or need to disappear, that's it, can't hide.
> Err, having people practice some self-discipline and personal responsibility would be great. Nice selling point of surveillance heh.
It might be if the only self-regulation centered around harmful criminal acts. Who doesn't want people to think twice before stabbing or stealing? Unfortunately what it actually means is self-regulation of any and all activity that someone in power might disapprove of.
That might mean something totally innocent like showing affection for someone who is the "wrong" sex, race, or caste. It might mean being overheard speaking out against the people in power, or against whatever gods they worship. And just because you might not need to worry about those things landing you in trouble today, who can say who will be in power tomorrow or what they will hold against you? The record is permanent and can be mined for evidence against anyone at any time.
Who wants to live their life feeling like everything and anything they do in public (and increasingly in private) could be used against them now or in the future? Because you can never know what actions will be an offense, the smartest thing to do is to be as totally non-offensive as humanly possible. To never do or say anything unless you absolutely have to. To never speak your mind.
That is the chilling effect of constant surveillance and while few people will actually live their entire lives by that extreme (few at least until increasing examples of punishment are publicly known), in the back of everyone's mind they'll be forced to consider and weigh the risks simply living their lives freely will bring them.
Your argument is probably the most important and fundamental argument against surveillance there is. The reason it will be ignored is this: for a million years our defense against other humans has largely been tribal, what I call the zebra defense: "I'm just like everyone else [in my tribe]", implying that if you pick on me you must also pick on my 'tribe'. This has evolved socially into statements like "I've nothing to hide". And it's so deeply ingrained in our nature that we will accept mass surveillance believing that it will only be used against 'enemies' of our 'tribe': criminals, immigrants, etc.
I've found the only compelling argument is to point out that enemies will invariably use the system against us; e.g. you have nothing to hide? You want to hide your children from predators, your location from burglars, your identity from identity thieves, your blood type from gangsters with kidney problems. In short, you have everything to hide from someone, and a lot of someones are watching you through the surveillance system.
I agree with this, but it also misses the very material threats of surveillance. That is, citizens are actively monitored for behavior and, beyond the philosophical self-policing aspect, they can face actual consequences. Surveillance is scary both for its philosophical chilling effect and for its material consequences. Those who surveil can make up arbitrary laws to violate a person's right to liberty and the pursuit of happiness.
I feel the sociological effects of internal regulation, but there is a long way to go before alternative behaviors are normalized, especially in the US.
A panopticon is fine if all the prisoners are unified, not ideologically, but in terms of acceptance.
Ultimately, it boils down to a perceived notion of "us vs them" which is entirely contrived.
I can only hope that individuals might resolve their philosophical disputes along a long enough timeline that the primitive notion of individuality permeates the collective and that personal fears of "the other" are dissolved into a unified something... I don't have the words for it.
Biggest monkey is boss monkey. Biggest monkey with most monkey friends is next boss monkey. Monkey friends only harass monkeys with no friends and suck up to the boss monkeys and their friends. It's good to be boss monkey. All the other monkeys have to regulate themselves or get ripped apart.
They can be stable for a few years at a time if there's lots of food, water, and decent weather, no injuries or major stressors. So yeah, boss monkeys don't stay that way forever, and it requires lots of work maintaining health and social relationships. The physical challenges are a function of pure physicality, there's no monkey jiu jitsu, it's all rage flailing - the fights are probably a lot less work overall than the socializing.
Had my creepiest-ever encounter with face recognition recently.
I was traveling home from our first international trip since The Before Times. The country we traveled from has a US Customs office in the airport. We walk up to the counter, look into the camera, and without having handed over my passport, the agent says my name.
I know that I've given them my photo and that the search space for my match isn't huge (people on flights leaving in the next ~2-8 hours), but it absolutely freaked me out. I can't imagine it meaningfully makes us more secure, and it feels like the kind of thing that, to this article's point, could be trivially abused.
I visited China with someone who was from China but had been living in the US. It was a hot day and at one point I wanted cold water (which, it turns out, is a very Western thing and hard to get elsewhere) and the only option was a vending machine.
I expected at first to pay with coins, WeChat, or AliPay but she just showed her face to the camera on the vending machine and it charged it to her bank account. She didn't have to enter a PIN or show identity and the only device she had on her was a US-bought iPhone with no cell service.
I was blown away that not only was a camera able to identify her out of the probably hundreds of millions of people enrolled in the system, but it was confident enough to actually charge money to her account.
I still think about that to this day and tell people about it when they talk about the level of tech in the US.
Actually, this sounds very much like the initial concept of credit cards (Diner's Club at 1950) - where the "mechanism of payment" is just that, you get identified and they send you the bill afterwards. All the extra complexity of current credit cards such as balance verification and authorization features are just tacked-on workarounds for various risks, but the original core idea (at least as I see it) was essentially about simply identifying which customer should be billed.
I'd say you got pranked, but that may be wishful thinking.
Yeah, I don't think that system is "confident enough". Just that the error rate is acceptable. Some poor sod gets charged instead of you? Welp, tough luck. Pretty terrible.
Why wouldn't they? Their faces have probably been in government face recognition systems for many years already. They also don't have much of a choice under the CCP anyway.
At least they get some extra convenience because of it.
The answer should be obvious. There are unintentional false positives and unintentional false negatives. But the real issues with this form of matching are intentional false positives and what happens to the data.
Well, that would be the issue in many parts of the world, but the reporting refers to China, which has additional issues. In case you are not aware, China is run by a dictatorial, authoritarian regime that has expansive powers and does not treat people humanely.
There have to be a ton of ways to fake someone's face. Holding a photo up to the camera might be a bit obvious to anyone around you, but a bit of latex or even a mask might not be.
But then again, if the surveillance level is already so high, you are likely to get caught after this.
(That is, if you stole money from someone important enough that the police care, though they would probably care about the general integrity of the system anyway.)
I agree, but I prefer it when they're at least upfront about it. It's much better than just silently collecting your data and then using it against you in ways you'll never know or be able to trace back even if it's suspected. If my health insurance premiums rise suddenly, it'd take a whistleblower for me to discover that it was because my face was flagged more often at fast-food counters or bars.
The contents of the chip are encrypted using the machine-readable text on the picture page, so you cannot harvest people's data like that without opening the passport.
But nothing is stopping the US customs service (or any other service) from storing the unencrypted data for later reference, right?
I don’t find it farfetched that some big players would have a database that holds basically a huge lookup table that goes from encrypted data to the unencrypted data.
This wouldn’t work on your first visits, but return visitors could have their rfid read like that.
Or do the rfid chips hold some kind of randomness in them so they can do a type of handshake (that is unique every time) where that machine readable password is needed?
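For ICAO 9303 passports with Basic Access Control (the older scheme the parent seems to describe), the answer, as I understand the spec, is roughly: yes and no. The chip and reader do exchange fresh random nonces during mutual authentication, so each session is unique; but the long-term access keys are derived deterministically from the printed MRZ fields, so anyone who has seen (or stored) the data page once can re-derive them. A minimal sketch of that key derivation, as an illustration rather than a complete implementation:

```python
import hashlib

def bac_keys(doc_number, birth_date, expiry_date):
    """Derive the Basic Access Control 3DES keys per ICAO Doc 9303.

    Dates are YYMMDD strings. The reader gets these fields optically
    from the MRZ, which is how the chip ensures it only talks to a
    reader that has physically seen the data page at least once.
    """
    def check_digit(field):
        # ICAO 9303 check digit: repeating weights 7, 3, 1
        values = {c: i for i, c in enumerate("0123456789")}
        values.update({c: i + 10 for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")})
        values["<"] = 0
        weights = [7, 3, 1]
        return str(sum(values[c] * weights[i % 3] for i, c in enumerate(field)) % 10)

    mrz_info = (doc_number + check_digit(doc_number)
                + birth_date + check_digit(birth_date)
                + expiry_date + check_digit(expiry_date))
    # Key seed: first 16 bytes of SHA-1 over the MRZ information
    k_seed = hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]
    # Separate encryption and MAC keys via a counter (parity
    # adjustment for 3DES omitted here for brevity)
    k_enc = hashlib.sha1(k_seed + b"\x00\x00\x00\x01").digest()[:16]
    k_mac = hashlib.sha1(k_seed + b"\x00\x00\x00\x02").digest()[:16]
    return k_enc, k_mac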
One thing to consider (when arguing against this stuff, as you definitely should) is that there are two distinct classes of arguments against it:
One is "What if it works wrong?," and yeah, as a black person seeing the current state of the tech, that's already a pretty scary one.
But probably the more important is "What if it works perfectly well and exactly as intended?" It's still an outrageously bad idea. Now, I'm actually not sure if you'll be able to ban it outright, but this is where you have to slog through the extremely important but perhaps slow-going route of "public policy."
The big joke is that any of these arguments matter. As if we live inside an enlightened society where citizens are engaged and informed on every issue and their preferences then bubble up to lawmakers. lmao.
90% of people don't give a shit at all. The airports are already running face recognition. PRISM continues, seemingly without opposition. Every 4 years we get to choose between puppet X and puppet Y, who promise different variations of "more jobs" or "America good" or some equally vague idea. There's no debating your way out of apathy or idiocracy.
It's weird to have to conclude that none of the biometric mining done in the last decade has provoked clear, binary opinions on its legality. You can mine features from pictures you don't even own, of people you don't even know (or ask for consent). It's entirely legal.
Metadata is taken in masses to surveil us all.
But there's no one calling for an outright ban or regulation of such things. Or let's say it's far from mainstream and only a tiny minority that does. The usual arguments are always balanced as in: "yeah but if it was forbidden, we couldn't do X anymore."
On the other side, for some tech that isn't nonconsensually exploitative, like cryptocurrencies, people simply feel like shutting it down, even though it arguably has use cases too. If it's a security in the US, it'll get shut down, no questions asked. Here, though, nobody cares.
Also, people’s faces can change: they can get disfigured, age, gain or lose weight, or even be weathered by the elements. AI models could be worse than humans at recognizing faces through such changes, even subtle ones.
The surveillance economy is the new business model. Billions of dollars are invested in surveillance tech.
Military technology, as usual, creeps toward consumers.
Politicians are serving the corporations.
So we have already lost this by accepting that technological advancement and economic benefit are more important than human privacy and overall dignity.
Two things come to mind. The first is some old humorous TV commercial where they had barcodes on your forehead and one didn't want to scan properly and the bank clerk or cashier was running this person's face repeatedly over the scanner.
The second is a historical incident that led to old "biometrics" being replaced with fingerprints as our default for identification. I've read about it previously and found this as the top result when doing a quick search (I will not vouch for its quality -- it is just evidence I am not making it up, plus enough info that you can go look for more if you find it interesting and want to know more).
How Look-Alike Leavenworth Prisoners Led To The Forensic Use Of Fingerprinting
The early part of the article makes the distinction between "unobjectionable" uses of facial recognition -- such as for secure facilities -- and facial recognition in public places generally. And it seems to me we are leaving out some important things when we discuss such things.
We are not talking about how our historical social norms and laws were rooted in a social reality that no longer exists. The world has changed radically in a short period of time and there are both upsides and downsides.
Historically, people lived in relatively small groups of close-knit people. Tribes. Small towns. Etc.
We mostly interacted with people we knew fairly well. This both provided some baseline security and also could be a prison from which you could not readily escape. Once labeled a "troublemaker" you would have a hard time living it down. People could easily frame you as the guilty party based on social expectation that it was typical behavior for you to do X.
It was hard to leave your place of origin and go elsewhere but if you could, you could potentially start over. Butch Cassidy and the Sundance Kid might have successfully started over had they genuinely left their life of crime behind entirely after moving to South America. But they relapsed and it did not end well.
We have statutes of limitations on various crimes because we have this idea historically that if enough time passes, you should not be held responsible for stupid mistakes you made in your youth. People can change, if they are given the chance to change.
It's complicated because it is human nature that if you make it too easy to get a pass for bad behavior, then you actively encourage bad behavior. But if you make it impossible to redeem yourself and start anew, then you give people no reason to bother to even try.
One of the problems with trends like facial recognition is that we are veering increasingly towards a very unforgiving world where every little thing you do will haunt you forever and there will be records even if you might have forgotten the incident entirely.
Yes, this is motivated in part by the fact that really terrible people like to look for the cracks in the system. They like to actively exploit loopholes. They will happily take the deal that they can get a free do-over without having to prove themselves only to keep doing terrible things because nothing is really stopping them.
But hard cases make bad laws. Designing a world optimized to treat everyone like they are this worst case scenario causes a great many problems while solving relatively few.
In a world with 8 billion people (roughly) and international passports required to go anywhere, etc. we are increasingly being painted into a corner individually and this will soon start to really come back to bite us, if it hasn't already. If nothing else, it makes it harder to migrate as one practical response to climate change making some areas less hospitable.
We need to begin thinking more deeply and more broadly about the social context in which our laws and expectations developed, how the world has changed and how to find a path forward in this new reality that isn't overly paranoid, overly controlling, etc.
> We have statutes of limitations on various crimes because we have this idea historically that if enough time passes, you should not be held responsible for stupid mistakes you made in your youth. People can change, if they are given the chance to change.
That's not the main reason for statutes of limitation.
A statute of limitations is a law that forbids prosecutors from charging someone with a crime that was committed more than a specified number of years ago. The main purpose of these laws is to ensure that convictions are based upon evidence (physical or eyewitness) that has not deteriorated with time. [1]
I'm sure there are many complicated reasons for such statutes. But not all crimes are subject to a statute of limitations.
Very serious crimes, like murder, typically have no statute of limitations. So I think to some degree it is reasonable to infer that an underlying impetus is to say "It's not worth pursuing this past a certain point."
I see that as twofold. One part letting people know they don't need to live their entire life in terror of an old parking ticket and one part husbanding limited societal resources.
The practice of declaring a person an outlaw who was no longer protected by the law was a means to say "You are so terrible, you don't deserve legal protection. Yet our system simply doesn't have the means to deal with you. We don't have the resources to track you down and put you to trial."
Outlaws could be killed out of hand by anyone and the killer was not going to be charged with murder.
We seem to have forgotten about how tenuous human survival was for so long. We seem to think the threat of extinction due to climate change is some radical new thing that we should be terrified by.
The reality is humans mostly lived on the brink. Starvation was a routine thing. Entire peoples being exterminated was commonplace.
There just wasn't the resources to track every little thing you did and try to take you to court over it decades later. We still only do that for extremely serious crimes.
We let the small stuff go at some point.
But that's only a "pass" if you did it a long time ago and stopped at some point. If you are still doing those things, your recent crimes can still be prosecuted and they sometimes will make an effort to show you are a repeat offender with a long history and put you away for a long time for that.
So sometimes they kind of do an end run around statutes of limitations if they think you don't really deserve a pass.
Like any human endeavor, it involves a judgement call and context matters.
I believe the train for this has already departed. It is very unlikely that in the next two decades the state of surveillance, in any of its many pillars, will slow down or stop. Lots of factors contributed to surveillance becoming as widespread as it is.
Since 2001, and through the numerous other times these arguments have been posted on HN (thank you @dang for the links): how have things changed for the better?
It's time to stop thinking that posting "arguments against" something is somehow equivalent to making a difference, and start making a difference for real.
Go the technical route and develop countermeasures to facial recognition? And then play the cat-and-mouse game of countermeasures and counter-countermeasures until the facial recognition is as good as humans? Then what?
Or do we go the political route and try to affect legislation? What I’ve learned from working against climate change is that politicians only listen to you if you represent a group of people. Building coalitions is a must for any kind of political change. We do build them, for tech people, on HN by posting articles against facial recognition. The thing we’re missing here is a way to organize collective action. But I think YC would want to avoid the kind of attention that would generate, so I think the site is designed against incentivizing that.
Mostly the latter, I think. Also just the "public perception" space (which these days, you'd think would be a pretty easy sell, given vaccine skepticism and all, but maybe not?).
You've eloquently buried in there a number of problems. But I would like to make one change to what you just mentioned: it's not that politicians only listen if you represent a group of people. Politicians listen to power and money, first and foremost (and not always in that order). Everything else is secondary.
That you're on a website that is designed against incentivizing collective action, and yet tries to pass itself off as a bastion of, well, let's call it practical and realistic rhetoric most of the time, is part of the problem. You're not going to find collective action anywhere because the moment you do, "they" are going to insert themselves into the collective to try to sabotage and break apart. "They" have decades worth of experience at it and don't mind killing people left and right, either.
The actionable advice, for this community, would be to put all that great intelligence together and try to come up with a way to subvert whatever opposition would stand in the way of a collective action. I don't know if there's money in it so sadly it probably couldn't be a YC backed start up. Quite the contrary - the money would most likely try to kill you, but nevermind that.
If we're going to outlaw automated facial recognition, which in general is better at recognizing faces than humans, we should release every person who is in prison because an eyewitness ID'd them.
I was a witness to a crime that led to a short police chase and a police shooting, and I couldn't accurately describe which direction the car was facing when it passed me (reversing or forward).
Up until that point I'd have thought I would be a reliable witness (I wasn't, and this proved it definitively!)
I actually have a very similar story. When I was younger, I was witness to a pretty bad car accident, and I gave them my information, and was later subpoenaed as a witness. Before going, I thought I had a pretty good recollection of the event -- it happened right in front of me. When I got there, I was also stumped by simple questions about which direction one of the cars was going.
It was really eye-opening for me. Thinking about it is still kind of eerie. I remember the accident. I can see it in my head. But apparently the memory is not exactly correct.
You were bamboozled by a clever attorney or made to look like an idiot by an incompetent one.
I served on a jury where an attractive, sharp defense attorney just ripped a witness to shreds. The person testifying was a customer engineer who installed and repaired a particular piece of equipment, whose entire function was dependent on time. He was testifying about what he did every day, not some nuance about direction of travel.
She built the poor guy up and then threw him off a cliff. At the end he ended up testifying that time is impossible to accurately measure. The issue was pretty obvious - the incompetent DAs didn’t prepare the witness for cross examination.
In my case, it was just one single simple question about pointing out the direction the car was coming from and going to on a diagram of the intersection.
There was nothing clever about the question. Maybe they had a few up their sleeve for subsequent questions, but we never got that far, as I was immediately dismissed. I spent maybe 60 seconds in the courtroom.
It really was a case of a casual observer not having useful information. Just because I saw and heard the impact doesn’t mean I noticed any of the details that would have made a difference to anyone.
> I was also stumped by simple questions about which direction one of the cars was going.
Presumably lawyers are very good at finding exactly the right questions to ask to exploit the weaknesses of human memory and make a witness look unreliable.
He's probably referring to the fact that AI is better than humans at telling if two photos are of the same person. But I agree that doesn't translate to being better at recognising faces in general.
People might still be better at recognising faces they have learnt, and while they might be worse in terms of percentage correctness, their failure modes may be less severe.
Your comment exemplifies my point. You have a hidden bias, which is why you used the word "obvious".
The metric that is used is extremely important. If the metric is "identify lots of faces", then AI is superior. If "identify faces correctly" is the metric, then it is far from certain or obvious that an AI is superior to a human.
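The base rate matters as much as the metric. A toy calculation (illustrative numbers only, not benchmarks of any real system) shows how the same matcher's reported matches can go from mostly right to mostly wrong as the searched gallery grows:

```python
def match_precision(far, frr, gallery_size):
    """Probability that a single reported match is the right person,
    searching one probe against a gallery that contains the true identity.

    far: per-comparison false accept rate
    frr: false reject rate for the genuine comparison
    """
    true_match = 1.0 - frr                     # probe matches its own entry
    false_matches = far * (gallery_size - 1)   # expected spurious hits
    return true_match / (true_match + false_matches)

# Same matcher, very different usefulness at different scales:
for n in (100, 10_000, 1_000_000):
    p = match_precision(far=1e-4, frr=0.01, gallery_size=n)
    print(f"gallery {n:>9,}: P(reported match is correct) = {p:.3f}")
```

At a hundred faces nearly every hit is genuine; at a million faces, with the same per-comparison accuracy, almost every hit is a false alarm. This is why "97% vs 98% accurate" debates miss the point without stating the population being searched.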