Europe's CSAM-scanning plan is tipping point for democratic rights, experts warn (techcrunch.com)
316 points by rntn on Oct 24, 2023 | 176 comments


> “The very fact that people are viewing these images — which is exactly the intent of the Commission to try and overcome — is a form of abuse itself. So if we now take innocent pictures, which have been shared among consenting individuals, and expose them to potentially hundreds of other individuals down the investigation chain then we are indeed actually creating more victims as a result of this.”

This is the crux of the issue. Not only will this plan have a bunch of negative side-effects, it will actually hurt the very people it's meant to protect.


Okay, you got a point... but also, think about the Terrorists! They could be exchanging Terrorist Secret Messages right under our noses!

Seriously, whenever anyone justifies their actions with "Think of the children" or "Think about the terrorists" single-minded emotional appeals, I can't see it as anything but cheap manipulation.

It should be required for proponents to back up their safety claims with hard numbers. They ask us to give up a lot without accountability, and I don't think the benefit outweighs the damage.

For example, the Patriot Act didn't result in a wave of terrorism convictions; au contraire, it facilitated massive human rights abuses, and the rights it took away have never been given back. Same with the massive increase in airport security: I don't know how many terrorists they have caught carrying nail clippers, but it seems like less intrusive methods would deter most terrorist attempts.


Much worse even: given recent and ongoing events, several EU governments have shown zero respect for freedom of speech and a tendency to accuse large parts of their population of "supporting terrorism". In the UK, police regularly harass citizens for expressing their views on sensitive political matters on social media and elsewhere. And I am supposed to welcome these governments reading my private messages in search of possible crimes? No way.


A former co-worker of mine used to work at a company where his job (he was a fresh college grad, so got the shit work) was to go through tons of text messages, social media posts, etc., and mark them as spam or not-spam. He... didn't last very long. It was just soul-sucking work.

A level worse is the people who have to trawl through social media posts looking for the gruesome types of ToS violations: images of extreme violence and gore, cruelty, etc. A lot of people in those kinds of jobs need copious amounts of therapy to deal with it (but unfortunately many seem to not have access to therapy).

Having to look at suspected CSAM all day to give an ok/not-ok determination on it? God, I can't imagine what that'd do to a person.


> Having to look at suspected CSAM all day to give an ok/not-ok determination on it? God, I can't imagine what that'd do to a person.

Especially if you didn't sign up for it. Like, someone in law enforcement probably has to do this, but I'd hope they signed up for it, received training for it and have a support network. But imagine if you're just a social media moderator and suddenly that's your job.


CSAM is just a smokescreen. The monitoring is the goal, and if/when they force this through, it will rapidly expand to cover anything and everything.


Also the members of the council pushed for website censorship, which will be included in this bill. They also directly lie about its contents. The EU allegedly has fact checkers, but they are a joke; otherwise they would be flagging the Commission's publications left and right.

To be honest, I haven't seen the EU as a public good for a while now. It has degenerated into a bureaucratic technocracy.

Before anyone tries to mention that I can vote for parliament and for one of the council members and everything is super dandy... I know, and that doesn't make anything better. Democracy is so indirect in its current implementation that we might as well do away with it completely. We will get bombarded with such bills until something sticks. This isn't anything I will support.


But similar initiatives will appear everywhere once they land somewhere (outside openly totalitarian regimes like China, where they are already in place). They have to be stopped, but I am afraid they will happen everywhere in the end: once they pass in some western countries, others will have a blueprint to copy. And citizens everywhere are being lied to; most non-tech (but also tech) people think this is a good thing. Friends in the US, UK and AU are applauding this type of stuff coming to their countries. Because of the children.

Unfortunately no one takes it seriously enough to run massive ad and TV campaigns in member states and educate the public that this is simply bad for everyone and doesn't help the children.


I think, but mostly hope, that this client side scanning initiative will fall apart. Not just because it's an obvious destruction of civil liberties for us in the EU, but also because it's doomed to fail even if implemented. The underlying concept is ineffective at dealing with CSAM, or any kind of adversarial user scenario. Unfortunately my hopes have a bad legislative record.

Clients aren't trustworthy. Ignoring the myriad services that exist, and will come to exist, which just never implement this -- there will be (somewhat) easily accessible versions of apps that remove such telemetry. E.g. VScode is to VScodium as WhatsApp is to WhatsAppium.

Imagine the spoofing opportunities. Message someone, get their client ID, report that client detected all the worst CSAM imaginable.

Think of the false positives. Think of the expense. Think of the children. Think of the children whose non-abusive parents send banal but confounding photos to each other, get hounded by CSAM allegations, and some fraction of whom get separated from their child.

Passing this will be a huge expense of time all of these legislators could use better. Implementing these directives will be a huge expense of money and knowhow to build a flawed and exploitable system. Enforcing the consequences will have untold amounts of suffering unrelated to CSAM. And likely after the first few creeps are caught, the adversarial actors will just start using other channels.

But by that point the conversation will have shifted to clientside scanning of thoughts...


> "Think of the children whose non-abusive parents send banal but confounding photos to each other, get hounded by CSAM allegations, and some fraction of whom get separated from their child."

How do you see a false positive progressing into child separation?

People who are actually convicted of possessing CSAM don't necessarily lose access to their children at the moment.


Ever since I started writing on this issue, I’ve had a number of folks contact me and say they were unwittingly flagged by Google Photos’ ML-based CSAM detection algorithm. The most recent of these told me the photo was sent as a (pretty awful) prank in a WhatsApp group, and their phone auto-saved it to the camera roll. All of these folks lost all access to Google services with minimal options for appeal, and have had their lives disrupted in various ways I can’t imagine.

I have no way of verifying what these people are telling me, and I can’t help them even if I did. And even more than this: my first reaction is to wonder if they’re all actually pedophiles who are just lying to me, and/or if they’re going to text CSAM at me. (Because that’s the power of this accusation — nobody will trust you once they hear it.)

Some of these folks have had their lives ruined and some have told me they were hospitalized and considered suicide, even though they weren’t actually prosecuted. I can certainly imagine being in their situation due to the mistake of having an asshole friend-of-a-friend in a group chat and some bad settings in my phone. (I have neither, thankfully.)

If none of this stuff convinces you, here is a much more carefully-documented version of a similar story, in which a dad took a photo of his toddler's rash and was subsequently investigated for CSAM. This one has the "happy" ending that the accused is entirely cleared by the police, but he never gets his Google account back and has to live with whatever other trauma and shame comes from the investigation: https://www.nytimes.com/2022/08/21/technology/google-surveil...


It’s even worse than this because once the idea of mass CSAM-scanning is normalized, a false accusation is immediately credible (they would know, after all…)


That’s an utter nightmare of a situation I’d never considered, or prepared to defend against.

I hope the “prankster” ends up caught and in prison


> I can certainly imagine being in their situation due to the mistake of having an asshole friend-of-a-friend in a group chat and some bad settings in my phone. (I have neither, thankfully.)

This always makes me think how much worse middle school will get: kids are guaranteed to push boundaries, they all have some anger justified or not towards each other or the adults they know, and some fraction will learn how nastily they can weaponize the mandatory reporter system. Even if the truth comes out quickly, there’s so much room for strife first.


I understand how the system can lead to people losing access to their Google account, but GP suggested people could lose access to their children. The NYT article is the opposite of what GP claimed - the case was closed before he even knew it had been opened.


Imagine you are in the middle of a custody battle when this news comes down, or that you are jailed before being cleared. Imagine the cops and social workers in your city aren't diligent, or that you already have a criminal record. And further imagine you don’t have the luxury of a New York Times reporter investigating your case and writing publicly about how you’re innocent. The NYT example is a best case outcome for an innocent person who triggers these algorithms.


I guess my point is I don't see how your "jailed before being cleared" example could ever reasonably happen. The NYT is a best case, but also an average case, and in fact any other outcome would be absurd.


I think your understanding of the common vs. uncommon cases here is out of whack. The common case is that parents who have done nothing wrong lose their children for days or weeks (at best) before things are resolved in their favor.

And then there's of course the (less-common, but happens often enough to be troubling) situation where parents lose their children and never get them back, even though they're innocent of causing any harm.

Note that I'm talking about the broader effect of Child Protective Services when they investigate any kind of reported abuse, not just CSAM possession/distribution.


The system isn't reasonable, so your denial based on what you think can "reasonably" happen is just your privilege talking.


That might be the case, but there would still be examples of it happening to other people, wouldn't there?


Re: unreasonable system, would you accept as evidence the 1115 children that were removed from their parents because the Dutch IRS labeled the parents as fraudsters (due to a biased and discriminatory AI system, of course)?

https://www.bnnvara.nl/artikelen/hoe-konden-1115-kinderen-va...


Indiscriminate surveillance combined with an assumption of intent: the system is set up to find CSAM, so it doesn't test alternate hypotheses.

There are many intervening steps[1] of consequences which can result. Not to mention chilling effects: it's now common knowledge in my family that everyone worries a bit if they take a photo of my son doing something and he's not wearing a shirt or something. Because no one's entirely sure of the extent or impact of whatever scanning systems already exist (hey folks: encrypt everything on the cloud, and thank god for Signal).

[1] https://au.pcmag.com/security/95768/nyt-parents-lose-google-...


In practice, I imagine it would go like this: in some minority of cases, law enforcement is given a report on someone that includes this detection "evidence," and for the duration of the investigation the child is placed in protective custody. Not necessarily permanently, but the experience of family separation can be traumatic even for a few days, and criminal investigations often take more than a few days.

At the same time, there are unfortunately many counterexamples of abusive parents not being separated from their children via the same mechanisms. Sometimes this is from the abusers being "good criminals" and not getting caught; but sometimes it's from LEOs being bad LEOs and not doing the catching.

My point was that in a system which surveils ~500M people, even an FP rate of ~0.01% is 50k people. Suppose the LEOs tasked with filtering this into credible (and triable) cases manage, through amazing police work, to confirm the innocence of 99% of those false positives. That still leaves 500 falsely accused, but now confirmed-suspicious, suspects.
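The base-rate arithmetic above can be sketched directly (the population, false-positive rate, and clearance rate are the commenter's illustrative figures, not measured values):

```python
# Base-rate arithmetic for mass scanning, using the figures above.
population = 500_000_000        # roughly EU-scale population under surveillance
flagged = population // 10_000  # a 0.01% false-positive rate -> 50,000 innocents flagged
remaining = flagged // 100      # even if investigators clear 99% of them...
print(flagged, remaining)       # 50000 500 -> 500 innocent people stay under suspicion
```

The point of the sketch is that with a large enough denominator, even a tiny per-person error rate produces thousands of false accusations before any human review begins.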


I had assumed the report sent to law enforcement would include the actual image sent (the suspected CSAM). This is what happens at the moment with NCMEC. The comments from Alexander Hanff seem to suggest this as well.


"known and unknown CSAM"

Given where image AI is, this is certain to produce a colossal number of false positives. In fact it'll soon be impossible to sift out false positives, if it isn't already. This plan seems like fighting the last war when there's a much bigger and different problem looming.


"Unknown CSAM" just means scanning all your files with AI to judge content. They can't be searching for infringing material that they haven't seen before, they'd have to train a model to interpret images.

Which means lots of people are going to end up flagged for pictures of their own kids, because just about every parent has nude photos of their children somewhere.


Which means the state has a ready made reason to detain anyone they don't like on the grounds of "child abuse". That sounds like the point of this to me.


Also you can't just have hundreds of thousands of judges, lawyers and jurors looking at CSAM all day, can you?

Better to have a panel of 10 or 20 carefully selected experts (with secret identities, of course, to avoid them being threatened by major CSAM cartels) who will inform all involved parties (including yourself) of how much CSAM you looked at and how bad it was.


If everyone is guilty of breaking the law, the true power lies in selective enforcement of these laws.


I think all of the arguments here talking about why these sort of measures don't work to reduce CSAM (or have too many false negatives or too many false positives) are missing the point:

While legislators do care about reducing child abuse, they're mainly using it as justification for this stuff so they can increase surveillance and control over their citizens. This is just a play to mandate client-side software that acts against the interests of the hardware owner. Once it's on there, it's a much smaller step to increase the scope of what it's required to do (either legally or through shady backchannel means) to things that the average person wouldn't be in favor of.


Don't know why you are downvoted, but that is very true. They are also trying to introduce a unique internet ID per citizen for age verification. The motives are pretty transparent here. Some say they don't want that, but they are very likely lying.


Client-side scanning can't and won't work because:

   1. Users can scramble the files to pass scans.
   2. Users can use the same client-side scanners as black boxes to verify that their modified files pass.
There is no scenario in which client-side scanning works unless the user is not allowed to modify anything on the system. Not the images, not the software.
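A minimal sketch of point 1, assuming the scanner matches against a database of exact (cryptographic) hashes: flipping a single bit of a file changes the fingerprint entirely. Real deployments use perceptual hashes instead precisely because of this, but those survive small edits only by also matching images that are merely similar, i.e. by admitting false positives.

```python
# Exact-hash matching is defeated by changing a single bit of the file.
import hashlib

image = bytearray(b"...pretend these are image bytes...")
before = hashlib.sha256(bytes(image)).hexdigest()

image[0] ^= 0x01  # flip one bit; imperceptible in a real image
after = hashlib.sha256(bytes(image)).hexdigest()

print(before != after)  # True: the fingerprint no longer matches the database
```

Point 2 follows from the same setup: a user who controls the device can run the scanner locally against a candidate file, see whether it flags, and keep perturbing until it doesn't.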

Mark my words, the only thing this will be good at is identifying honest law-abiding citizens with political causes that are not aligned with their current government. If this passes, real democracy, the sort that ends unpopular government-endorsed laws like race discrimination and prohibition, will die in the EU.


The European Commission is on a power trip.


>The European Commission is on a power trip.

Never waste a good crisis. It's been well documented that crisis events are the best opportunities for governments to push through anti-democratic and anti-citizen laws that erode freedom and transfer wealth from poor to rich. Such events provide the perfect excuse to bypass the usual, very long and thorough democratic processes, where everyone takes their sweet time to dissect, analyze and debate proposals. And we recently had plenty of crises: Covid, inflation, waves of refugees, the war in Ukraine, the war in Israel.

"Don't think about it too much you poor taxpayer, you don't understand this issue anyway, it's too complex for you and plus, you're too busy being distracted and worrying about this virus/rampant inflation/CoL/war to have time to think about such petty issues like your privacy, so just sit back, relax, worry about paying your rent, and let us worry about protecting your online privacy, m'kay?"


> the best opportunities of governments to expressly push out anti-democratic and anti-citizen laws

Coming from my perspective, in a family who fought multiple wars to preserve liberal democratic governments based on social contract, this seems backward, and beyond my value system.

Anything that I would see as a legitimate "government" would be pro-democracy and fully in the service of its citizens.

If the cynical slant you suggest is true, then we don't actually have governments any longer, and you should not use that word. In that case, we ought to figure out how we lost real government, what we have instead, and how to get rid of it.


> If the cynical slant you suggest is true, then we don't actually have governments any longer, and you should not use that word. In that case, we ought to figure out how we lost real government, what we have instead, and how to get rid of it.

Under this perfect standard, you never had a "real government" in the first place. You had, at best, some approximation of it, and you gradually lost it to time.


> If the cynical slant you suggest is true, then we don't actually have governments any longer, and you should not use that word. In that case, we ought to figure out how we lost real government, what we have instead, and how to get rid of it.

- "we ought to figure out how we lost real government": It happened because of our collective inability to overcome recency biases, which the State often exploits to its benefit. A tragedy happens, people want it fixed, the government offers a "fix" that grants itself more power, and by the time it's realized that too much was given away, it's too late to fully reverse the "fix".

- "what we have instead": What we have now is a State whose main focus is its own self-preservation, even when certain high-level agents/politicians have the goal of doing the opposite. Day-to-day-wise, the State is mainly focused on making sure that the State continues to exist, with the accomplishment of what it was meant to do in the first place being secondary.

- "how to get rid of it": Only a few types of events can eliminate such a State. (1) An event so cataclysmic that not even a State (with total control over everyone & everything) can survive; (2) A ground-up total rejection of the State as a whole, even the good parts; (3) A rival State defeating the existing State & taking over.

The event that causes (1) implicitly also means the destruction of modern society & a regression to medieval ages.

(2) will be hard to drum up support for given the benefits from the good parts of the State, even if they're ephemeral.

The new State from (3) is just as likely to be as bad/good as the old state, just with different warts.

At its core, it's a human hardware/software problem. The human hardware needs to be made more resilient against adversarial attacks & internal failures, so as to reduce the need for a State to begin with. Similarly, the human software needs to be improved to mitigate cognitive biases as much as possible.


The word you're looking for is "tyranny"


The state security apparatus is forever in conflict with personal privacy rights. Everything that protects Paul Public also protects Chris Criminal/Tommy Terrorist(/Randy Revolutionary). It's an old conflict that continuously pushes to weaken every form of privacy protection, from legal (investigators love that so much of our lives now lives in third-party hands, where they can access it more easily) to technological (the continuous assault on encryption since it became feasible for the average person to use it in communications).

As a side note, this is also something that anyone who works on privacy-preserving software will have to reckon with at some point if they think seriously about their goals: if your tools are strong enough to protect journalists (or whatever oppressed group you prefer) in hostile regimes, they are strong enough to protect criminals doing things you personally abhor.


Criminals doing bad things is the cost of having privacy. We never had, and will never have, a perfect society.

Just look at the EC members: they hide their communications and use WhatsApp to negotiate contracts worth billions of dollars (see the direct "negotiation" between von der Leyen and Pfizer), but we should give up encryption because criminals use it as well...

The hypocrisy is astounding.


Every government is on a power trip. It’s not specific to any one implementation.

The organizations that push things like this self-select for power hungry people who see the majority of people not as peers but as subjects to be ruled over.

It works the same everywhere. This is not a European thing.


The entire argument can be summed up like this:

It is impossible with current technology to completely eliminate any photos or videos from existence. So long as the Internet exists and people have access to banned media, that banned media will continue to proliferate. To combat this, the state needs to be granted extraordinary powers to peer into people's devices and accounts and look at their files so that we can prosecute people in possession of banned media, and the state super-duper promises that the only media we'll ever ban is child pornography, and nothing else ever, though you can't actually stop us when we push to expand this power later, which we will absolutely be doing.


This is an attempt to solve the right problem (CSA) but of course the wrong way.

It's like saying 30 years ago: "people record such abuse onto VHS tape - if we talk to Sony and Scotch, we can prevent this somehow". Well, actually that is a little more realistic, but still... preventing child sexual abuse should be a huge, high priority for all governments and all citizens.

If there were some surefire, guaranteed method, we would do it tomorrow, no matter the cost.

We just don't really know how - it is some combination of mental health reform, massive educational reform, and child social services reform, so that the really at-risk kids are treated, but also extending that meaningfully to all children.

I would be interested in any clever ideas


>This is an attempt to solve the right problem

It really isn't.

When a new law or program is announced that is supposedly meant to fight terrorism, child abuse, or drugs - and when the actual scope of the law or program is more than terrorism, child abuse, or drugs, then:

You are being manipulated into voting against your interests. 100% of the time.


We used to know the four horsemen of censorship as Drugs, Terrorism, Child Porn, Money Laundering... You can get whatever you want, just say it's for one of those things.

Now... they barely bother and everything is just "for safety".


Require all children under the age of 18 to wear a smart watch with video and microphone recording that can be accessed by a child protection agency at any point. You could have on-device content recognition, so most of the data is discarded without being uploaded, unless that child is tagged for extra attention by authorities. ("We use on-device smart algorithms to ensure privacy. Your child's data will never be uploaded unless on-device AI detects an abuse situation. And who would want to prevent that sort of data upload beyond the abuser?")

Have it continually check for watch removal, and if it is removed, upload the last 30 seconds of video to identify who removed it.

If the child or parent removed it, issue a fine to the parent and a mandatory interview for the child with child protective services, to uncover the 'true' reason for the removal.

For recharging, each child could be issued with a pair of them; one is always on charge, the other is worn. If the currently worn watch detects that the other watch is not being charged, it alerts the user. If that is not corrected within a certain period of time, an infraction is generated.

Multiple removal / 'accidental battery depletion' infractions would result in successively higher fines, and potential removal of the child from the parents.

This is all doable with today's technology. You just need to manufacture consent with a couple of high-profile cases where a smart watch saved a child from an abusive parent, maybe a kidnapping attempt where the police managed to track the child and recover them, with the "lesson" being that all children need this form of tracking.


> Require all children under the age of 18 to wear a smart watch with video and microphone recording that can be accessed by a child protection agency at any point. You could have on-device content recognition, so most of the data is discarded without being uploaded, unless that child is tagged for extra attention by authorities. ("We use on-device smart algorithms to ensure privacy.

I also watched that South Park episode.




That is no better than jailing the kids in an open-air prison.


>This is an attempt to solve the right problem (CSA)

Is it though? Does punishing people for possessing CSAM actually have any effect on the rate of CSA?

I'm not saying criminalizing CSAM isn't the right choice, but is there any data at all to suggest that punishing the kinds of people that would be caught if we scanned their cloud storage for CSAM actually have a real effect on the number of children harmed?

Because if it doesn't, then it stands that the only thing this would accomplish is the feel-good act of putting pedophiles behind bars - but I'm not really worried about pedophiles; I am worried about child rapists and child pornographers who actually commit crimes against children.

I don't see how going after end-users is going to do anything at all about the supply chain. But I see a million ways that being allowed to go after end-users would give the state lots of power to search people for just about any crime they decided was important enough to warrant this kind of invasion of privacy. If they are allowed to do this, there is a 100% chance they'll expand beyond CSAM within a few years.


> Does punishing people for possessing CSAM actually have any effect on the rate of CSA?

I think this is a more fundamental question than it seems. In a few years - probably fewer than it takes to actually implement these laws - almost all CSAM could be AI-generated. Then consuming it would be effectively a victimless crime: basically punishing someone for his sexual inclinations in the absence of any actual victim.

Unless of course it's been proven that consumption of CSAM substantially increases the risk of CSA behaviour in the consumers.


> This is an attempt to solve the right problem (CSA)

I fail to see how it does that. The abuse happens during the production of the material, not during the spread of it (yes, I know, spreading it is adding insult to injury -- I'm not defending any of it). And there doesn't even need to be "material" involved (recording is optional), yet I see no mention of tackling that problem.


That's my point - it's an attempt, just not a (very) effective one.

I don't know what is effective - hence why a group of otherwise presumably intelligent and well-intentioned lawmakers is throwing this Hail Mary pass.

The sarcastic suggestion of recording every move of every child is terrible - but honestly I can see something like it happening (presumably with parental consent).

I have often promoted an idea of "MOOP" - where we record ourselves in real time, and across millions of people an open, scientific view of human behaviour can be built up and optimised - from spending habits to personal interactions.

Somewhere between utopia and dystopia


It's really disturbing that there are people out there who believe that this, in any way or form, is meant to be used to protect anyone.

It fucking isn't. You're being fooled and you're falling for it. Not only that, you think that monitoring everyone is a good idea, which is absolutely insane.

Even more insane is this idea of yours, where everything is recorded for scientific research, completely ignoring that this absolutely will be used to manipulate us even more than we already are being manipulated.

You are absolutely insane. Get a grip on actual reality again.


Most abuse happens within the family or close to it [1]. Scanning pictures will do nothing, or close to nothing, to solve the issue. It's just a backdoor for further surveillance.

1. https://victimsofcrime.org/child-sexual-abuse-statistics/


This is an ignored uncomfortable fact.


There are no clever ideas and no easy fixes for this problem, which is that the vast bulk of child sexual abuse, like the vast bulk of all child abuse, happens at the hands of a trusted adult relative or close family friend, in a context that should be safe.


The 80s and 90s did an absolute number on people's ability to recognize societal dangers, because they were obsessed with not insulting anyone. "Your children don't want to do drugs, they're being pushed onto them by the bad drug dealers! (because of course we know you're a good parent, and your kids are good kids, and our moral worldview has no room for nuance or emotional understanding...)"

Same thing with child abuse. Hell, the infamous "razor blades/poison in candy" Halloween story[1] started when a father murdered his 8-year-old by deliberately placing cyanide-laced Pixy Stix in his son's trick-or-treat pile.

[1] https://en.wikipedia.org/wiki/Poisoned_candy_myths


I slept on it and I still can't figure out what you're trying to get at here. You seem to gesture at some sort of causal relationship, but at what I've no idea, though I assume your claim that kids get abused because they go looking for it to be unintentional. In any case, would you care to clarify?


The point is that because no one wanted to address the ugly truth of these issues, they either fabricated or vastly overstated non typical dangers.

Kids do drugs because they want to experiment. Dealers don't give out free samples.

Candy doesn't get poisoned: the case which instigated the idea was a father trying to cover up murdering his child for life insurance money.

And strangers in vans rarely ever try to grab children off the street to abuse them: trusted adults in authority positions do, frequently family members. Or church leaders. If you have an instance of child abuse, it's going to turn out to be a family member more often than not.

The point is that the 80s/90s didn't want to acknowledge this reality, so they invented or amplified stereotyped villains that would be easy to recognise... except of course those villains all but don't exist.


Okay, sure, but I guess I don't get why that's especially significant versus other decades' ways of not looking at the problem, of which there seem to have been about as many as decades for quite some time now.


Because when people decided they were going to mobilize and do something about the problem...they spent a lot of time and effort essentially reinforcing the systems which allow abusers to operate, and would do little to prevent it.

So we wasted decades worrying excessively about "creeps in vans" rather than teaching children bodily autonomy concepts at an early age, and teaching adults to actually listen to what they're being told by children when they try to seek help.

And this wasn't harmless: the gay community, the BDSM community - in fact any outgroup which looked too "different" was - and still is! - the preferred target when your plan is "find a villain stereotype, and go after them". It is still happening, right now in regards to trans-rights.

Rather than just failing to prevent harm, it literally expands the harm to other groups (which is what this article here is all about as well: "only people with something to hide would worry about us invasively scanning all your data to stop the bad people").


Okay and again, granted, but that's hardly new, is it? In the limit case it can't be younger than the blood libel, which has been "going to and fro in the earth" for a dog's age. Outside some coincidental variation in targeting, there seems little enough to tar one example as categorically worse than the others, and I don't recall any overt pogroms from the 80s or 90s, though I think the "satanic ritual abuse" people may have got close to lynching someone a time or two.

It's not that I don't share your opinion of such practices, but mine comes of having been abused and then spent the following several decades studying the literature on the phenomenon to try to understand what happened to me and why it was allowed. The conclusion I've reached is that, even in societies where child sexual abuse is correctly understood as a problem - which is not even all of them - this is an understanding honored, in the vast majority of cases, entirely in the breach.

Oh, sure, you get the occasional high-profile case, usually around a highly atypical serial offender. But most of the time, you get what I got: made clearly to understand that, even if you do find in yourself the courage to tell someone you should be able to trust, you will not be believed and there will be no help coming - and you would be wise to speak no more on the topic, lest you make yourself inconvenient by creating a scandal in the family, for which you rather than those responsible will typically be blamed. And God help you if you should fall pregnant by a male relative! I at least didn't have that to worry about - a very cold sort of mercy, I grant you, but a mercy nonetheless.

And then too, suppose you tell a mandated reporter who actually fulfills that duty, and the state swoops in to the rescue. What then? I've known a couple of people who, I later came to learn from a mutual acquaintance, had spent considerable time in the foster system. They were notable in my experience for being among the few who've had even less to say about their upbringings than I typically do about mine. Which leaves me to ask myself: given the option, would I really have chosen that over the life I actually had? At least my grandfather wept in shame, after. Him I can forgive. Those who insisted what they did was God's work, by contrast, make it worth even an apostate's while to retain some faith in the existence of hell.

Given even those attempts that are made seem so often to net out more ill than good, I think this is not a problem our societies know how to address, and even if it were, that would be the work of decades and thus would still remain the incentive to go on doing as we do now, sacrificing the well-being and sometimes the lives of the most vulnerable among us - children and members of marginalized groups alike - on the altar of just...not wanting to deal with it, and secondarily for the sake of any moral panic's political utility.

This is what I mean when I say there are no easy answers. You're not wrong about the shape of the 80s and 90s in this regard, although I continue to suspect you may have yet to quite encompass the full breadth of the history here. But your analysis is incomplete, in my view, where it assumes that things went wrong somehow in those decades specifically. They did not; they merely continued as normal, just as they have before and since. Only the choice of targets and some accidents of rhetoric varied, just as they always do. The issue of real interest here is not one example of moral panic over imagined forms of child abuse rather than honestly facing the fact of it; the issue is every example, and more fundamentally there being so many examples because this is what we always do. Because this is a problem we as a society choose simply not to try to solve.

I think I understand the reasons for that. Certainly I've put enough effort into it that I hope I do by now! But to understand is not to excuse, and there is no excuse for the sacrifice of children; it may be only one damnation among many, but a damnation it remains nonetheless.


>If there was some sure fire guaranteed method we would do it tomorrow, no later the cost

No goal is worth pursuing no matter the cost.


> It's like 30 years ago saying "people record such abuse onto VHS tape

30 years ago was before the Internet became commonly available, and there was not a lot of talk about CSAM. Hell, the first federal law against it was ... 1984?

Why did we not really care about it before then, but now it's a "no matter the cost" problem?


> 30 years ago was before the Internet became commonly available, and there was not a lot of talk about CSAM.

Cypherpunk FAQ dated 1994-09-10 had:

> 8.3.4. "How will privacy and anonymity be attacked?" [...]

> like so many other "computer hacker" items, as a tool for the "Four Horsemen": drug-dealers, money-launderers, terrorists, and pedophiles.

* https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...

The 'Four Horsemen' term was coined in 1988. So yes, it was being talked about thirty years ago.

Some of us witnessed the (first?) Crypto Wars of the 1990s:

* https://en.wikipedia.org/wiki/Crypto_Wars


I don't think there was ever a time when people didn't think it was a problem, but the focus was more on going after the people abusing children rather than the people passing around recorded evidence of the crime.


If I remember correctly, more of the focus was on the collectors inciting abuse by paying for the videos and thereby creating a market for it.


> If there was some sure fire guaranteed method we would do it tomorrow, no later the cost.

That's obviously not true. Here is a simple solution: sterilize as much of humanity as possible. Preferably via an airborne mutagen or something like that. Boom, no children to be abused anymore within the span of a few generations.


If you use a sufficiently potent airborne toxin, you'll achieve results even sooner.


Well I would argue that killing children is child abuse in itself.


Or we can use existing tech and solve the problem with global nuclear annihilation.


Or we could create a virus, spread it across the world, invent a new type of vaccine, tell everyone that it helps against the virus but actually slowly kills people leading to an excess death rate nobody officially talks about!

Fewer people, fewer births, fewer children, less child abuse!

............ yeah I had to write that one. It fit the chain too well.


IMO this is an attempt to take advantage of a preexisting problem in order to force mass surveillance legislation upon the populace


That is exactly what it is and that is why they're ignoring everyone that tells them why they shouldn't do it


Note that as it stands, the current proposal won't pass, as multiple member states (NL, Spain, Poland) are against it and it's probably unlawful. It has to be thoroughly revised to pass; those changes will probably make it even more useless than it already is, but hopefully will save us from privacy/encryption backdoors, this time.


They also have ambitions to block websites, basically common web censorship. The current president of the commission already tried that on a national level in Germany.

It cannot pass under any circumstance, it needs to be rejected as a whole. There is no compromise to be had.


There are many petitions in favour of this proposal; I cannot find one against. I did email our Dutch minister with my concerns. Any other plans? It is scary how many (including tech people) want this to happen 'for the children'.


There are immense lobbying attempts by large tech companies. Otherwise, nobody in tech circles really supports this at all. On the contrary, the rest of the industry is more or less unanimously against it, with a lot of experts deriding the idea.

There are also some civil rights activists who believe this proposal to be quite illegal.


Well, most of the technical people I know personally who work in tech are, to my surprise, in favour of it. I just asked them after reading comments here. But they, like many, did not inform themselves; they just 'want more protection for children'.


Perhaps there is a regional difference, but I don't know a single developer who is in favor of it. I don't see how anyone with technical knowledge could come to a positive conclusion here if they look at the error rates and the problems for privacy.

There were also public hearings with technical experts about feasibility, which were said to be a huge embarrassment for the people proposing the bill, as the rejection was so clear. I think of nine experts, zero accepted the proposal, and they made it pretty clear that it is to be rejected with prejudice.

The only supporters in tech that I know of are those who want to sell their own AI solutions. As I said, there was quite a bit of lobbying.


They already do in Spain over "piracy" issues, with DPI. No way to watch a 'pirated' soccer match without a VPN.

As for "piracy": if you can tune in to foreign TV streams, it is not piracy.


If you’re watching content that you don’t have lawful access to (i.e.: the licensing for those streams very likely doesn’t include you), it’s absolutely piracy. It’s just really easy and unenforceable piracy.

That doesn’t mean it’s ethically wrong, but that’s another question entirely.


How the hell do euros spend 5 years proudly bragging about their GDPR to the rest of the world and then go ahead and even consider something like this? I assume this doesn't directly violate the GDPR but even so it seems like an obvious contradiction to the spirit and intent of GDPR.


I don’t understand what confuses you so much. This is for “the safety”. “Trusted governments” will make sure “exemptions” are in place and everything will be “okay”. Why worry, do you have something to hide?


Well, because I'm the euro who did the bragging about our GDPR, but won't even consider something like this. I don't know who is in the set of people who want this, let alone the set of people who want this while bragging about the GDPR.


This weakens your privacy with regards to sharp governmental powers, something the GDPR never strengthened or addressed to begin with due to its categorical exemption of law enforcement. Well, competent law enforcement (if you needed a laugh).

And to all the EU cheerleaders about to educate me how, thanks to the GDPR, my local municipality has to treat my email with care when I go ask for a building permit: go gaslight your grandma instead.


"Competent" in this usage means "empowered" not "proficient".


Can't believe I missed that, especially for how off it seemed in the latter form. Thank you.


No it’s not at all a common use of the word, stumped me too the first time I saw it used like that.


In a few years all CSAM will be AI generated; if they don't hurry up, they'll be left only with fighting terrorism as an excuse for draconian laws.


This is a blessing for them. Now they have an excuse to "regulate" AI. I doubt they'd try to ban local models entirely (even for them, that's ridiculous), but the accusation will offer a convenient way to harass sites offering them.


The EU minus Denmark and Ireland, who have opted out of the area of freedom, security and justice domain. I don't see how real democratic control over CSAM-scanning is possible in the EU. But national parliaments can vote for or against laws with their own version of CSAM-scanning; a new parliament can change those laws.


CSAM scanning and Chat Control are some of the scariest ideas the EU is pushing.

What shocks me the most is how little people react over this.


Thank god Wojciech Wiewiórowski is a voice of reason in the middle of all this. But it’s extremely concerning that it’s reaching all the way to our final thin lines in the EU. This shouldn’t even get further than the lobbying.


> supports mass surveillance and is willing to give up on democracy "against child abuse"

> supports whoever does war everywhere around the world, by selling weapons to one side, gas to the other side, or saying some side is just defending themselves and should continue to do whatever

We can draw two conclusions from this:

EU supports children*

*unless they come from Yemen, Gaza, Ukraine, or are somehow on the Mediterranean Sea (then they're on their own; if they die, that means fewer migrants)

And the EU is scanning to check that you have compliant views on world events.

I can totally see that if you're a journalist who documents war abuses, and children were involved in these, you'll have evidence of "child abuse" and you're going to get reported.

Sorry to get political for once. But we have never ever done a single thing for children (EU was just a business club, nothing more):

They let known paedophiles continue to work in schools, they just change their jobs.

Some "mandatory" sexual education was defined at the European level. It was aimed at teaching children about themselves, and how to recognize, refuse, and report abuse within their families. But it's often just skipped in schools because of "budgets" and "planning".

And unless your local government (usually the city) pays for activities and clubs for children, they're just left to grow up on their own: if they're in a city with drugs, they're usually raised by the cartels. The EU won't set up activities.

But if it's about scanning everyone's phone, then they say they need to act for the children.


This is just further evidence that the GDPR is really just protectionist legislation for EU tech and media companies. The EU doesn't really care about the privacy of its citizens.


Looks like this CSAM idiocy is just a pretext for changing EU treaties.

And as usual, it looks like the Germans want to be more important than the others...


Use Tox: toxic for GNU/Linux/BSD terminal nerds, aTox for Android users, qTox/uTox for desktop users on Windows/Linux/Mac/BSD, and who knows what else.


Undermining privacy like that sure seems dangerous in a me-and-my-rights sense, but the ability to sway public opinion on a grand scale via Twitter misinformation & Cambridge Analytica stunts etc. seems much, much more dangerous to the concept of anything "democratic" to me.

Who the f cares about encryption protocol details if you've lost control of the state & the primary democratic mechanism as a whole?


Traffic fatalities would fall nearly to 0 if we made all speed limits 25mph. Obviously, as it has been for decades, child safety is the cloak that tyranny is dressed in. Not to mention CP is a common thing to pin on inconvenient people in society. The EU has been doing a lot of rent seeking off Google & big tech. They hit them with fines all the time. If I actually believed they gave a fuck about people or privacy it wouldn't be as offensive to me, but they don't. They want big tech doing what they want, they want "protection money" or at least a slice of the pie, and they want more government control of the citizenry. I always admired Europe as an American, but it's clearly in the same (or worse) state of decay as the US.


> Traffic fatalities would fall nearly to 0 if we made all speed limits 25mph.

Wales is giving it a go:

https://www.bbc.com/news/uk-wales-62134399


Looks like they're just limiting speed in built-up areas. That's pretty common around the world, including in the US.


> I always admired Europe as an American, but its clearly in the same (or worse) state of decay as the US.

Don’t passively let these systems develop, you can lobby them too.

World construction isn’t just a gradient of people’s mores, it is catalyzed by a couple individuals just like the ones pushing this CSAM scanning

From my experience, everyone feels like they’re doing the most good thing when they are pulling the controls of the state, even though they are all subverting the popular vote with access and money, or vice versa.


Off topic, but I just want to share that yesterday I was traveling, and in a residential area (a village) that nobody has set foot in for 30 years, I was caught "speeding". I was going 75 in a 40 km/h zone.

I don't get why it's 40 km/h. The traffic sign wasn't even visible, although that's not an excuse that would or should work. It's not like anyone lives there.

Anywho, I paid 150 euros and now I have to go to court, where they will 99.9% take my driver's license away for 3 months. Fucking bullshit.

I study law at uni. I don't get why these archaic villages that nobody has set foot in for a long time are still considered residential areas, limiting how fast you can legally go. I thought the whole point of the law was to serve society and its interests, not to fill the state budget by catching people "speeding" in an area where the speed limit shouldn't realistically be that low, but alas it doesn't seem so.

Like, I live in a village too, but it's got a population of over 1k as opposed to 0.


> Traffic fatalities would fall nearly to 0 if we made all speed limits 25mph.

You're saying it as if it's a bad thing. Nah, it would be great (with some exceptions that can be addressed). It would encourage ubiquitous and efficient public transport.


Consider moving to Switzerland then. Not only are they discussing a universal 30 km/h (18 mph) limit in urban areas, but some cities are also tweaking traffic lights to maximise stopped time on top of that.


I live in a 30 km/h zone in Switzerland and it works fine. I'm not sure what the problem is.

It's a really common speed limit in Europe/UK and is suitable for residential city streets. On GeoGuessr I have seen similar US streets posted at 40 mph, which looks completely unsafe.


And we have ubiquitous and efficient public transport.


Switzerland has 1/2 the population of New Jersey. Are the accomplishments in public transportation because the Swiss have solved some fundamental issues that no one else possibly could, a truly amazing feat that the world should look to? Or do certain things work better under certain conditions, and it's pretty much the exact same thing everyone does that doesn't scale?


> Are the accomplishments in public transportation because the Swiss have solved some fundamental issues that no one else possibly could, a truly amazing feat that the world should look to?

Loads of other countries also have great public transportation, especially when the current state of the USA is what’s being compared against. Possibly including the USA itself before cars got going.

> Or do certain things work better under certain conditions, and it's pretty much the exact same thing everyone does that doesn't scale?

The thing which public transport needs to be good at scaling is passengers per hour per land area.

In this regard, approximately all public transport — bus, light rail, heavy rail, tram, ferry, underground — scales better than cars.

This is why the Swiss built them, as all those steep hillsides and valleys already have houses in them, and tunnels (regardless of if they’re road or rail) are expensive.

Seriously, why does this trope of saying “oh America can’t possibly, it’s such a big country” even exist, when the USA also has a road network despite being big and those roads are themselves generally much wider than Europe’s roads? It’s not like public transport is some special magic category that’s different from all the civil infrastructure the USA already makes, the only difference seems to be that “public” is a dirty word.


The real answer is that the US doesn't want public transit. I visited a rustbelt town several years ago, and a new mall had been sited specifically so that it would not be on the bus network: being accessible to the people who ride the bus was considered a liability for the businesses that would rent space in the mall.

More recently I visited a much nicer midwestern town which is planning an expansion of its bus network and optimizing traffic lights for better bus flow. There is a new mall on the bus line. The difference is that 95% of bus riders in this second town are upper middle class college students.


To flip the argument: ok-to-good public transport is a Europe-wide thing. The US is what, 2/3 of the population? Why don't we ask if the US needs to grow for public transport to scale?

Ah yes, because that’s the wrong dimension in the first place. Population density is the one that matters.


At the same time our government wants to increase the cost of public transportation [1] with the age old argument that it is too expensive compared to the cost of individual transportation.

While I don't have the numbers to analyze the costs, it just doesn't make sense to me that a car with a single driver can be more cost-effective than a bus, tram or train. I guess the automobile lobby has convinced loads of my colleagues that individual transportation is price-comparable to public transport, even in Switzerland...

[1] https://www.srf.ch/news/schweiz/hoehere-billettpreise-bund-s... (german)


> doesn't make sense to me that a car with a single driver can be more cost effective than a bus, tram or train

This is a fascinating area to research. Of course there are trade-offs to model, such as: the price of a person's time waiting for the next bus, the time spent going all around the mulberry bush instead of point to point, and hassles and personal safety in public places.


> some cities are also tweaking traffic lights to maximise stopped time on top of that

Are you sure about that? It would greatly increase unnecessary emissions.


It would also make it take twice as long to drive from a ranch in Summer Lake, Oregon to the local grocery store in Lakeview to get groceries.

Some large swaths of the United States don't have public transit because of the expanse and also because of the use cases. People in the cities rely on people in these rural areas to grow food, so its not like you could wish everyone into the cities and solve the problem.


Lowering the speed limit to 25 mph isn't a magic wand. People will drive at whatever speed they feel safe driving.

If we want to reduce pedestrian fatalities, there are a raft of social policies and urban planning decisions we can make that reduces fatalities to a potentially very low level if we decided that's a priority. We don't need to rigorously enforce the speed limit to make that happen.


One day I'd like to create a taxonomy of hacker news responses. This is a good example of the one where someone gets pedantic about the details of an analogy employed for rhetorical purposes, objects to those details, and in so doing completely misses the point.


That's not exclusive to hacker news in the least, I've seen it happen at about the same rate throughout communities regardless of how technology-centric they are. It seems that the inability (or unwillingness) to put away the reality of the analogy and only take it as a bridge towards the point the person is trying to make, is universal.


Make it illegal to own a car that does not have a governor which limits it to 25 mph. There are ways around that. Then the only people speeding are those with illegally modified cars. Then you just have to suggest the only reason someone would want to drive faster than 25 is to specifically run over children.


This sounds vaguely familiar. I've seen almost this exact chain of reasoning somewhere and I can't quite put my finger on it.


> Make it illegal to own a car that does not have a governor which limits it to 25 mph.

„Intelligent” speed assistant becomes mandatory for any new model homologated in the EU from next year.

But since we are onto banning random things that piss us off: can we ban bicycles moving faster than 25 mph, and also being an idiot? Or even better, make it illegal to procreate with an idiot. Like mosquitoes, idiots will surely die out if they cannot procreate. Now someone just has to come up with a legalese definition of an idiot.


Will a CSAM scanner implementation eventually be repurposed to block politically sensitive material? Telegram is already modified by the Google Play store to block specific channels.


Of course. The UK RIPA (Regulation of Investigatory Powers Act) was widely advertised as a measure to combat terrorism; its use very quickly became routine, and it is now used by local councils to investigate cases of people illegally feeding pigeons.

https://www.theguardian.com/world/2016/dec/25/british-counci...


That's why we shouldn't call it scanning for CSAM. We should call it mandatory submission of all private communication to government inspection. Fighting CSAM is just the alleged, first, use-case.
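To make that concrete, here is a toy sketch (mine, not any real system's code; real deployments use perceptual hashes like PhotoDNA rather than exact SHA-256, and the filenames below are made up) of how hash-list scanning works. Note that nothing in the mechanism knows or cares what the list contains; that decision lives entirely with whoever maintains the list:

```python
import hashlib

def build_blocklist(known_bad_files: list[bytes]) -> set[str]:
    # Whoever supplies these files decides what gets flagged.
    return {hashlib.sha256(f).hexdigest() for f in known_bad_files}

def scan(attachment: bytes, blocklist: set[str]) -> bool:
    # The scanner only compares opaque digests; it cannot tell a
    # hash of abuse imagery from a hash of a banned pamphlet.
    return hashlib.sha256(attachment).hexdigest() in blocklist
```

Swapping a "CSAM" list for any other list is a data change, not a code change, which is why the use-case label carries no technical guarantee.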


Yep. Argument one is, this system will quickly be used mostly for other things.

Argument two: Even if you can stop 100% of the transmission of CSAM materials, you are not stopping the abuse of children, just the secondary abuse of distributing the materials.


It happened almost immediately in Australia with COVID tracking data[1].

Were I in a position of any type of authority, careers would be ending and jail time would be handed out like candy over this. I'm not in charge, unfortunately, and not enough citizens know how to react to these expansions of power until they are impacted personally.

But as soon as the capability exists, it really is a slippery slope: it's far easier to gradually expand the scope and reach through administrative changes than it is to start the program up initially - and there's no track record of any government reasonably resisting this sort of insider threat.

[1] https://www.abc.net.au/news/2021-06-16/police-refused-to-sto...


Exactly right - and initially the expanded use of these systems is not visible. PRISM, for example, was found to be used frequently by NSA employees to look into the activities of romantic interests or exes. Nobody knew about it for years until an audit took place. We're told that the abuse of this system was unusual and handled appropriately, and no longer a problem. (Is it, really?)

The next phase is of course the court cases, where authorities seek legal access to the data which is collected. Usually the police win these cases to establish legal precedent. Still - your average citizen is not reading civil rights-related or tech-related news, and they have no idea this happens.

By the time your average person finds out what the newly collected data is being used for, it's far too late to do anything about it.

Then there's the political aspect, which we are not likely to ever hear about. The boss's boss's boss's boss needs a favor done, don't ask why or for whom. That data, go get it. Listen, be discreet, no email and no cell phone. This is the one which is the most scary; an authoritarian in power who wants to target the opposition. Depending on the country, that could be a list of people who disappears and is never seen again - or in the US, gets raided over dubious charges by the local police as a form of harassment.


Why do you think RIPA was widely advertised as a means to combat terrorism?

Here is the Home Office site from the time; https://web.archive.org/web/20000510065613/http://www.homeof...

Which includes an open letter the Home Secretary wrote in defence of the bill; https://web.archive.org/web/20000601233317/http://www.homeof...

Here are the Explanatory Notes from the act itself; https://www.legislation.gov.uk/ukpga/2000/23/notes#:~:text=S...

None of them mentions terrorism.

What RIPA did was make it illegal for local councils to do the kind of busybody, nosy-neighbour surveillance they were already doing - unless they obtained an authority to do so. It also provided a means of redress for people wrongfully subjected to such surveillance.

In other words it introduced proper regulation of intrusive investigative techniques.


Of course it will. They couldn't care less about children; they're just using them as a political weapon to make people accept it. I have absolutely no doubt that the system will be immediately expanded to search for and punish wrongthink of all sorts.


> Will a CSAM scanner implementation eventually be repurposed to block politically sensitive material?

The likes of Google already have content scanners automatically censoring some messages - namely, blocking e-mails about cheap viagra.

It would be very easy to add some extra keywords for politically sensitive topics - but we've been blocking spam for 25 years and I can't recall any reports of it happening.
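For illustration only (a toy keyword filter of my own, not Google's actual pipeline, and the phrases are invented): the "extra keywords" point is that repurposing such a filter is a data change, not a code change.

```python
BLOCKED_PHRASES = {"cheap viagra"}  # the advertised purpose
# Repurposing is a one-line data change, e.g.:
# BLOCKED_PHRASES |= {"protest tomorrow", "opposition rally"}

def is_blocked(message: str, phrases=BLOCKED_PHRASES) -> bool:
    # Case-insensitive substring match against the blocklist.
    text = message.lower()
    return any(phrase in text for phrase in phrases)
```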


It'd take a whistleblower for us to know about it. Companies have been repeatedly caught filtering politically sensitive topics inappropriately (yahoo comes to mind https://archive.thinkprogress.org/yahoo-appears-to-be-censor...) but it's usually explained as an "error" and I suspect that in many cases it genuinely is, it's just impossible for us to tell the difference between "caught them red handed censoring" and "caught them red handed being bad at their job"


If you can't find any instances of politically motivated censorship in Google results you have been willfully blind.


I didn't ask about search results - I asked about e-mail spam detection performing politically motivated censorship.


>Will a CSAM scanner implementation eventually be repurposed to block politically sensitive material?

Of course it will. This is a certainty.


If you use Android, use F-Droid; it's much better.


Why the question mark? Replace it with a period. It is cleaner.


Because it is a rhetorical question and it would be grammatically incorrect to use a period.


Perhaps we need a way to show that a question is rhetorical? /r


Jeez man. Seems like a news story about the EU being on a power trip hits the front page every week now. Thanking my lucky stars I don’t have to live under such a controlling regime.


I'm not sure if the US is much better, or if we're just a lot quieter about the three letter agencies spying on our communications. In some ways the transparency is kind of refreshing. At least the EU has something to fight back against while we can only guess at the extent of the abuses that are happening right now.


This is far from what a "controlling regime" looks like.


> Thanking my lucky stars I don’t have to live under such a controlling regime.

I live there and it is descending into sheer horror. And all the various member states pass these things into national law, then, when people complain, use "we have no choice, it's the EU" as an excuse to make your life miserable.

It's not just about CSAM. The intrusiveness of KYC/AML, for example, is out of control too. So much so that the EU is now apparently trying to correct its trajectory a bit: too many businesses and individuals are complaining that, of all the people required to play compliance/KYC/AML snitch for the state (notaries / lawyers / bankers / insurers / accountants / etc.), a bit too many feel empowered by some ego trip and believe they're suddenly part of the Spanish Inquisition, bent on burning heretics.

Take just about any subject: say, cryptocurrencies. I know, I know: many here hate cryptocurrencies, but bear with me... What does the EU do regarding cryptocurrencies? Quickly pass a directive mandating every VASP (Virtual Asset Service Provider) to report to the kommandantur the name, physical address, and virtual address of any EU citizen owning cryptocurrencies. It doesn't only concern exchanges/brokers like Coinbase: it also concerns companies selling hardware wallets. You may hate cryptocurrencies, but this is not about protecting the people. It's about total, complete and utter surveillance.

Because they can.

They can pass any directive they want and they do so.

At the moment the nation states in the EU still have their own fiscal rules but I'm sure this is coming to an end too.

The EU is very quickly turning into the EURSS. It's horrible to witness.

I'm ready to get the fuck out (I already moved to a less totalitarian country, sadly still inside the EU), but it's hard.


> The intrusiveness of KYC/AML for example is out of control too

Mostly because of pressure from the US. Which is lame af, but the US is pressuring sovereign countries to crack down on these things; it's no different outside the EU (well, there are exceptions, but I doubt you'd want to live there, talking about totalitarian regimes). Even HSBC, the traditional AML culprit, adheres to this crap even in its home territory under US pressure.

> I'm ready to get the fuck out

Please do, and see how much you enjoy it. I have lived on every continent; the grass looks greener, but it's not. Though if you don't try and just sit here being annoyed, you'll never know.

In the meanwhile, vote for the people that are in favour of democracy and privacy.


Your police forces regularly murder citizens exercising their legal and constitutionally protected rights[1,2], and experience no consequences for it as well as get to hound the families for free long after the media loses interest.

The EU has a public debate on issues, and consistently rejects them - which is the price of freedom, eternal vigilance.

[1] https://www.abc.net.au/news/2023-04-15/us-police-shoot-man-d...

[2] https://www.washingtonpost.com/graphics/investigations/polic...


Current day EU is trying to be like one of those comfy Norwegian prisons but with more guards and surveillance.


UK (Camden Lock) or US? UK currently is pretty bad for many ‘controlling’ metrics.

https://www.comparitech.com/vpn-privacy/the-worlds-most-surv...


You mean, like the NSA, Five Eyes, the Assange affair, the anti-encryption export laws in the 90's (hello NetBSD vs the safe OpenBSD from Canada)?

No one is good here. It's like the war between Russia's Wagner and Ukraine's Azov. A war between thugs.


It makes stuff like GDPR ironic too.


I hadn't even considered this, but you're absolutely correct. That's hilarious, in the most depressing way possible!


The EU now has pretend democracy, so those who pass EU laws are not accountable to the people. I know people disagree with the facts about EU voting, so let us know when the typical EU citizen gets to vote on chief Von der Leyen retaining office.


Almost all European nations currently believe it's perfectly okay for 14-16 year old children to have sex with adults. I'd argue that if we are so concerned about children's safety, we should first change the law so pedos are not legally allowed to groom and have sex with children in the EU.

The most hilarious thing about all of this CSAM and online safety stuff is that Europe is totally backwards when it comes to child sexual safety. You might recall that recently in the UK, Hollywood celebrity and sex addict Russell Brand would frequently call taxis to pick up his 16 year old fangirls from school so he could have sex with them. This was perfectly legal, because pedophilia is legal in most of Europe.


This is more subtle than you make it seem. Consensual sex between a minor and an adult is okay, abuse is not, and all European nations you’re talking about have a lot of legislation around that fact.

For example, a 16 year old girl might have sex with her boyfriend of 19 without having to fear legal repercussions, but a teacher or supervisor will face charges for any kind of sexual or emotional overtures towards their underage students or staff.

Generally, I would call this a lot more pragmatic than assuming everyone is just grown up at some magic date.


This. My English lexicon in 'legalese' isn't good enough to explain that in proper terms.


14-16 year old "children" are usually well into puberty, and the girls are fertile. It is perfectly normal to let them consent to sexual relations. And I don't see what's wrong with a 16 year old "child" having sex with an 18 year old "adult".

And you have to realize that 14-16 is the age of consent. It means that younger than that, a child simply cannot legally consent, any sex act is rape, no matter how well informed and how much the child wants it.

But it doesn't mean anything after that is a free pass; you still need actual consent. Manipulating someone into having sex is not consent, no matter the age, but that is especially true for minors, who are easy targets. There may also be laws that additionally require consent from a legal guardian for sex with a minor. And of course, sex work by minors is illegal.

I think these CSAM laws are terrible, but at least, in Europe, the idea of protecting the children is not keeping teens in chastity.


Pedophilia is being attracted to persons under the age of 13, which isn't legal anywhere in Europe

It is in the Middle East, but their (Islamic) culture doesn't consider it as pedophilia, and (some) men in their 20s and 30s get married to young girls of around that age


Pedophilia indeed is referring to sexual attraction to prepubescent children.

Hebephilia refers to sexual attraction to early pubescent children.

Ephebophilia refers to sexual attraction to mid to late puberty children.

In the US, legally none of these are explicitly outlawed. There are "ages of consent" laws, which are aimed at the age of emotional maturity of the individual.

Also in the US, people like to label violators of age of consent laws as 'pedophiles', because that label connotes the worst of the worst sexual impulses. But in fact they're often conflating the two things: a 19 year old having sex with a 14 year old is most definitely not a pedophile, but he could still be breaking the law, which makes him a criminal.

But hey - let's throw in one more twist here. 100 years ago or so, said 19 year old would have been committing pedophilia, because the age of puberty has dropped drastically since then.

All that said - I'm not going to die on the hill of suggesting an Ephebophile is less bad than a Pedophile. Just that most people are using the latter term incorrectly, and conflating it with an age of consent violation.

...And for the folks downvoting me:

https://en.wikipedia.org/wiki/Pedophilia#Misuse_of_medical_t...


Even when the age of consent in Spain was 13 (now it's 16), I wouldn't call that pedophilia.

No, pedophilia is not legal in Europe. Also, having an age of consent of 16 doesn't mean you are legally able to watch porn made with 16 year olds; the actors must be 18. And there are additional issues if you are in a position of authority over your partner.


> 14-16 year old children to have sex with adults.

> change the law so pedos..

> pedophilia is legal in most of Europe.

Sorry, but this language is imprecise and intentionally deceptive. The biological definition of "child" is that of an individual between birth and puberty- i.e. an individual who hasn't reached biological sexual maturity.

A pedophile is someone who is attracted to prepubescent individuals, below the age of 13. Someone attracted to sexually mature teenagers is not, technically, a pedophile, whatever moral panic Americans might display about it. Don't get me wrong, I agree it's, well, seriously questionable. But, as EU law recognizes, it depends on the circumstances, the age difference between those involved, etc- it's not a clear cut case of being sick monsters.


Your argument seems to be based entirely on your disagreement with some European nations about what the age of consent should be. Why is your opinion more correct than theirs? Why is it obviously wrong for a 40-something year old to have sex with a 16 year old, but if the latter was two years older it obviously should be fine?


I'm bothered that they're pretending they care about children when they don't. It's insulting to victims.

When my girlfriend was in school she was groomed and sexually abused. This is not uncommon here in the UK. A lot of working-class schoolgirls her age were, and are, groomed and sexually assaulted by adult men every year. But what's sad is that there's little she or anyone can do, because in the UK our politicians believe 16 year old school children can consent to their sexual abuse.

And I'm not talking abuse as in sending nude photos. She was taken to a hotel by a married man who she thought loved her for sex, then she later found out he was just using her and other "English slags" for sex.

I'm not saying the law in regards to the age of consent is wrong – perhaps you can reasonably argue that she at the age of 16 should have known better. But I will get annoyed if my country is going to continue to turn a blind eye to the physical sexual abuse of children while making out that this is an urgent problem.

It's just insulting and hurtful to suggest that at the same age if instead of being "raped"[1] she was just coerced into sending nudes online that the state would have cared.

[1] I understand legally the UK doesn't consider an adult man exploiting and having sex with a 16 year old rape, but regardless this how my girlfriend feels about what happened to her so I will use that word anyway.


> I'm bothered that they're pretending they care about children when they don't. It's insulting to victims.

First, let's suppose that the EU doesn't care at all about teenagers being sexually abused. Therefore they don't care if children are sexually abused? There are no children who are not teenagers?

>A lot working-class school girls her age were and are groomed and sexually assaulted by adult men every year.

Yeah, no doubt. A lot of women of all ages are sexually assaulted. Therefore sex should be illegal, right? Otherwise we don't care about the victims of sexual abuse.

>She was taken to a hotel by a married man who she thought loved her for sex, then she later found out he was just using her and other "English slags" for sex.

Sorry your girlfriend feels otherwise, but that's not sexual abuse, and it has nothing to do with whether she was young or not. There's no age when a woman cannot agree to have sex with a man who says he loves her but actually he just wants sex; in fact, it's quite a common occurrence. It's not about sex, it's about experience; when you're inexperienced or trusting it's easier for others to take advantage of you. It's also never going to be possible to make it illegal for people to lie to each other. Or rather, it can be made illegal, but it can't be stopped. If I want something from you and I have to lie to get you to give it to me, I'll lie. I might have tricked you into giving it to me, but I certainly didn't abuse you. It's only abuse if I take it from you by force or coercion.


[flagged]


Yes, "why is X wrong?" and "I like X" are equivalent sentences. You know, I can understand that you find it difficult to articulate why you find it icky for grown men to have sex with teenagers, but you aren't obliged to say anything if you have nothing to say, when someone asks why that is wrong. You don't have to make an ass out of yourself.


> why you find it icky for grown men to have sex with teenagers

Michael Bluth: Okay, you know what you do? You buy yourself a tape recorder, and you just record yourself for a whole day. I think you’re gonna be surprised at some of your phrasing.

Tobias: Butterscotch! Wanna lick?


I have a different proposal to solving the privacy vs enforceability issue: require a license for strong encryption.

Any Tom, Dick, or Harry can have encryption of a certain strength, chosen so that the cost of breaking it over the next N years lands around some target dollar figure: low enough that the NSA will pay it for terrorist suspects, but high enough that mass surveillance is infeasible. Also, require court warrants for accessing this data or for attempting to crack the cipher.

There will be applications where this is insufficient, say e2e communication with banks. These can be licensed and supervised.

I think this provides a decent tradeoff to a bunch of problems:

- people can still enjoy fairly strong encryption

- states can't spy on people willy-nilly

- but will be able to do it, with a lot of effort, where they are really motivated

- sure, "bad guys" can still download GPG and encrypt something with a very hard cipher - but that in itself will be an offense...


Encryption is either strong or it isn't - there's no room for a "breakable by governments but not by anyone else" tier of encryption. We learned this lesson the hard way with "export grade crypto" - i.e. the reason why DVDs were so hilariously easy to decrypt. The safety margin of how many bits there are between "breakable with a smartphone" and "unbreakable by every computer on earth" is very thin, and it shifts a lot, so the NSA has no advantage over 'bad guys'. The moment the NSA can brute-force a key so can anyone with a botnet of compromised AWS instances.
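To put rough numbers on how thin that margin is, here's a back-of-the-envelope sketch (the keys-per-second rates are made-up illustrative assumptions, not benchmarks): every extra key bit doubles the attacker's work, so the entire gap between "one smartphone can crack it" and "a nation-state-scale cluster can crack it" is only a few dozen bits.

```python
import math

# Back-of-the-envelope brute-force math. The keys/sec rates below are
# assumptions chosen for illustration, not measurements of real hardware.

def years_to_search(bits: int, keys_per_sec: float) -> float:
    """Expected years to find a key by searching half of a 2**bits keyspace."""
    return (2 ** bits / 2) / keys_per_sec / (365 * 24 * 3600)

SMARTPHONE = 1e8       # assumed: ~10^8 key trials/sec on one phone
BIG_CLUSTER = 1e18     # assumed: ~10^18 keys/sec for a nation-state cluster

# The whole gap between the two attackers is log2(1e18 / 1e8) ~= 33 bits.
print(f"margin: {math.log2(BIG_CLUSTER / SMARTPHONE):.1f} bits")

for bits in (56, 64, 80, 128):
    print(f"{bits:3d}-bit key: "
          f"phone {years_to_search(bits, SMARTPHONE):10.3g} yr, "
          f"cluster {years_to_search(bits, BIG_CLUSTER):10.3g} yr")
```

Under these assumed rates, a 64-bit key falls to the cluster in seconds while a 128-bit key is hopeless for everyone, which is the point: there is no stable key size that is crackable by one well-funded party but safe from the rest, because the attacker's budget only shifts the threshold by a handful of bits.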

Alternative proposals to force messaging apps to provide decryption keys for otherwise securely encrypted material aren't any better, because computers and encryption algorithms cannot read and understand a court order[0]. So if the keys are held by the government then they can decrypt everything without actually needing to ask someone else and prove that all the constitutionally-mandated procedures were followed. If the keys are held by the communications service, then they can at least theoretically refuse an illegal order[1], but then they run the risk of insiders stealing people's secrets.

Remember that the push for E2E encryption started specifically because of two related trends:

- Intelligence agencies doing incredibly sketchy shit to bypass encryption at major tech companies (e.g. that NSA diagram explaining how you can get implants into Google datacenters and bypass the encryption)

- Companies wanting to protect the privacy of their users from themselves, because they hire a lot of people with potentially too much access to the production equipment.

Banning E2E, going back to "export grade encryption", or demanding decryption capability rolls both of these back significantly. You can engineer a key escrow / backdoor system to resist one particular kind of attack at the expense of making another easier, but you can't eliminate both while maintaining nonconsentual decryption capability.

[0] This is a corollary of that old IBM saying: "a computer can never be held accountable, therefore a computer must never make a management decision".

[1] Let's put aside the whole "no expectation of privacy on metadata" nonsense for now. While that is already a privacy nightmare, opening up communications content to unlimited decryption is a bigger can of worms.


You may want to read up about the so-called "crypto wars" of the 90's. You'll find it illuminating.


This is a terrible idea. Today's "fairly strong encryption" is tomorrow's broken encryption.


Well, assume some Moore's law increases.


It's also assuming that only government entities could afford to build a cluster that could crack this kind of encryption.

Criminal enterprises don't need to run mass surveillance, but they may well find it worth paying to crack encryption for very specific targets.


The criminals don't even need to build it. They can just repurpose a DDoS network...


Not sure if OP is serious or if all of the above is sarcasm.


or how about this: the government stays the hell away from me, and can come knocking on my door once it has some evidence of wrongdoing


Everyday we slip closer and closer to the abyss of authoritarianism propped up under the guise of "security" by the thunderous applause of the populace.

We didn't need a whole Star Wars trilogy to tell us why that's a bad idea...


Poe's law vibes...



