And I'm tired of people imposing arbitrary limitations on technology in the name of "privacy". It's become another one of these ideas that you just can't question without becoming a bad person in some circles. Why? Why can't we ask whether "privacy" is so precious that we should preserve it at any cost?
Much of the supposed loss of privacy comes from technological development normalizing the exceptional. Now everyone can have an unblinking ever-watching doorbell. So what? You can't roll back progress. What you can do is embrace new technology, accept its downsides (almost always over-played), and figure out how to use new capabilities to make life better. (For example, we can lower crime.) These people beating the "privacy" drum don't acknowledge that there are real regulatory and opportunity costs coming from the measures needed to preserve this "privacy" and that ordinary people might prefer not paying these costs if they were fully aware of the trade-offs.
What specific and concrete unjustified harm has ever come to someone as a result of public facial recognition?
>> What specific and concrete unjustified harm has ever come to someone as a result of public facial recognition?
The harm is to people's expectation of privacy in a public place, which is non-zero. For example, if I'm walking around town talking to my partner, I don't expect random strangers to keep pace with us and eavesdrop on our conversation, because, even if it happens in a public place, it's a private conversation.
In the same way, I don't expect private or public organisations to watch my every move and keep track of where I go, what I do and whom I speak to, every single day, which is possible with facial recognition technology and is, by its indiscriminate nature, unjustified.
You're probably not happy with this reply because you have framed the question as "concrete and unjustified harm", which strongly implies either financial or physical harm. But there is concrete and significant harm that can come to persons that is not financial or physical and that is nonetheless recognised by law. For example, Article 4 of the European Convention on Human Rights makes slavery and servitude illegal without any precondition that violence is exerted, or that the person is otherwise harmed physically or in any other way than by being in servitude, i.e. the harm is recognised as being caused by the condition of servitude itself. And you can rest assured that nobody takes slavery and servitude lightly, at least in most European countries that are signatories to the treaty.
> You're probably not happy with this reply because you have framed the question as "concrete and unjustified harm", which strongly implies either financial or physical harm. But there is concrete and significant harm that can come to persons that is not financial or physical and that is nonetheless recognised by law. For example, Article 4 of the European Convention on Human Rights makes slavery and servitude illegal without any precondition that violence is exerted, or that the person is otherwise harmed physically or in any other way than by being in servitude, i.e. the harm is recognised as being caused by the condition of servitude itself.
How does one become enslaved without the threat of physical or financial harm?
And, unfortunately, the threat seems to work pretty well, even if "juju magic" itself of course doesn't.
Additionally, the threat of physical or financial harm is not itself physical or financial harm.
So that's a kind of harm other than physical or financial harm. My assumption was that the "specific and concrete" condition you placed on "harm" in your original comment was meant to mean physical or financial harm.
I have to ask: was I wrong to make this assumption? I apologise if so, but could you clarify what you meant by "specific and concrete" harm?
I disagree with you, but I still don't like seeing you downvoted.
Actually, I kind of agree with you. The problem isn't data (i.e. lack of "privacy"). The problem is abuse of this data - by governments (foreign and your own) (e.g. being locked up because you're the spouse of a journalist, or being refused at the border for saying the wrong thing), corporations (FB banning you for supporting the wrong party, insurance premiums rising because you're speeding), even your neighbors (perverts jerking off to videos of your kids)! I hope that someday in the future, we can have a world where all (well, most of) your data is used in your best interest. Unfortunately, current legislation is, to a large degree, made with the idea that it's not going to be 100% enforceable anyway - so many laws would have to change to make that happen... from child porn & consent laws (children shouldn't have the rest of their lives ruined for having sex, or messaging naughty photos of themselves), speeding laws (different countries have different max speed laws, so clearly at least some of them are wrong and/or arbitrary), insurance laws ("preexisting conditions" need to be covered, smoking can't be singled out as the only dangerous behavior), free speech laws (you shouldn't be punished even for the most offensive jokes), ... just some examples off the top of my head.
A bit off topic, but the argument is 100% correct. We just need to figure out the actual causal factors, all else is bad statistics, leading to bad policy.
> In 2007, the Small Arms Survey found that Switzerland had the third-highest ratio of civilian firearms per 100 residents (46), outdone by only the US (89) and Yemen (55).
> In 2016, the country had 47 homicides with firearms. The country's overall murder rate is near zero.
(Not saying of course that whatever they do in Switzerland is applicable to any other country, in any kind of short- to medium-term. The best short-term solution for the US might as well be banning/restricting gun ownership.)
> Why? Why can't we ask whether "privacy" is so precious that we should preserve it at any cost?
I would suggest opening up one of the many books that deal with the history of the various police states of central, eastern and southern Europe over the last 100 years or so. This continent is scarred to the bone by authoritarianism, millions of nameless graves and all that. If that's not a good enough reason, I don't know what more to say. Ignorance of the past is deadly.
> What specific and concrete unjustified harm has ever come to someone as a result of public facial recognition?
And what specific, real benefit? If they are going to implement facial recognition, then I would like to see actual prevented crimes, terrorist attacks, caught criminals, etc. - not just numbers. I want to see what my liberties were given up for.
If there is 100% facial recognition in every public square centimeter of Europe, then there should not be a single "bad guy" on the loose.
PS: why downvote the parent? Even if I don't agree 100%, in my opinion it's a very valid PoV and not formulated in a trolling/flaming way.
When it doesn't work correctly [0]. But it's easy to think of other scary scenarios, too. Having worked in the public sector (in the UK) I know how useless they are at securing data.
Are you crazy? The persecution of Muslims in China is heavily reliant on surveillance technology. As this technology rolls out, it will make surveillance cheaper, and as a result, will make the job of authoritarian regimes easier.
Saying there aren't good, documented examples of 'concrete', 'specific' harm is akin to saying that hydrogen bombs are not dangerous because they've never been dropped on people.
I'm equally tired of the argument that it's OK to invade people's privacy because technological "advancement" is more desirable.
But the issue is really more fundamental than that -- why should others be able to force me to live a worse sort of life just so they can have cool gadgets?