Wild how out-of-bounds it apparently is to say, but even if age verification were empirically proven to protect kids, I’d still be against it.
It's taboo in our culture to say this, but what keeps me up isn't just what people are afraid of; it's how far they’ll go to feel safe. That’s how monsters get made.
We’ll trade away the last scraps of online anonymity and build a legally required censorship machine, all for a promise of safety that's always just out of reach. And that machine sticks around long after anyone remembers why it was built, ready to be turned on whoever’s out of favor next, like a gun hanging above the door in Act One.
But say this out loud and suddenly you're the extremist, the one who "doesn’t care about kids." We’re already past the point where the "solution" is up for debate. Now you just argue over how it'll get done. If you actually question the wisdom of hanging surveillance over the doorway of the internet, you get boxed out, or even labeled dangerous.
It's always like this. The tools of control are always built with the best intentions, then quietly used for whatever comes next. History is clear, but polite society refuses to learn. Maybe the only real out-of-the-box thinking left is not buying the story in the first place.
> The tools of control are always built with the best intentions
Not to be overly pessimistic, but I'd say tools of control are only occasionally built with the best intentions. Normally they're built with intentions that are maybe not the worst, but certainly bad. Good intentions are the marketing spin that comes after the fact to ease adoption, like lube on a blunt object headed for your nether regions.
> We’ll trade away the last scraps of online anonymity and build a legally required censorship machine, all for a promise of safety that's always just out of reach.
A good example would be the EU's proposed "chat control" regulation: wiretapping every channel (even encrypted ones) on the off chance that illegal material might be shared.
> if age verification was empirically proven to protect kids, I’d still be against it.
It's really wild. Imagine a hypothetical ideal implementation: ZKP, no privacy issues, completely safe. And yet people are STILL against it. I can understand pro-privacy advocates, but I really don't know what kind of person would think this.
What's extra wild is that there's no justification given for this in your comment. There's some completely unrelated stuff about censorship and anonymity. The point, per the headline, is that there are no privacy issues; you get to keep your privacy.
They are against it because they know a bait-and-switch setup when they see it. The people promoting this hardest are not concerned about child safety; what they want is a monitoring/censorship system for the internet, for the vast political and economic control it would enable. Even if they did implement a perfect ZKP initially, it would not be long before it got diluted and eventually became a full-on tracking system.
A ZKP still needs a provider I trust. Sure, the same is true for the web of trust, but that doesn't consist of shitty ID providers. And the companies currently jumping on that train for a quick buck certainly aren't convincing.
And even then, data leakage is entirely possible, perhaps even very likely.
There is plenty of justification. It allows the government to put up and demand arbitrary walls. A single legislative change and any criticism is illegal. That is not a power government should have. I don't think people supporting such efforts have really thought through the implications.
And what about bots accessing content? The good ones, I mean. You would cement knowledge in the hands of the large data hoarders, handing big tech even more power. I could write whole pages...
The history on this is really clear: when you create a "for your safety" speech control, it gets used for all sorts of other stuff.
In my dataset of free-speech-limiting measures enacted for safety reasons, 89% eventually expanded in scope to limit speech relating to LGBTQ issues, feminism, women's health, and politics. This isn't hypothetical; it happens over and over again. Each time we have folk pointing this out, and each time we have people saying, "You're overreacting."
ZKP or not, if you make Chekhov's gun, someone's going to use it. Privacy isn’t the point. Unless your ZKP also magically prevents scope creep and political misuse, hard pass from me.
It’s like going to the gunsmith and saying, "Don’t build Chekhov another gun, you know it’s going to go off," and he just shrugs and says, "There’s no way it happens a twentieth time."
Can you give an overview of how this limits free speech? And where do you find past examples similar to age checks, for this to be Chekhov's gun?
When you say that even if this helps protect children (which, from what I've seen, it probably does) you are against it anyway, I stop taking your objection seriously before the end of that sentence.
I'm going to give you a homework assignment. Find twenty examples of free-speech limiting laws, policies, or practices. Then go through each one and determine if it was also used to limit political speech, feminist or LGBTQ related speech, or information on women's health. Report back with your findings.
This exercise will explain the answer to your question and give you the background needed to understand it. Please, don't be willfully ignorant to prove a point.
You're still trying to prove that free-speech-limiting laws will limit feminist/LGBT topics. I literally never argued with that. Try to prove how age screening limits free speech and we can continue.
I see what you're doing here. You're narrowing the scope of the argument in order to make a specific point. This is a common rhetorical move, especially in legislative debates: "Because this exact bill hasn’t passed, you can’t point to any real-world failures." I get it, and I understand your logic, but it misses the bigger picture.
The conversation is about the broader category of laws that limit speech. Treating age verification as if it's completely separate from the patterns and risks we've seen in speech-limiting legislation is, honestly, a weak position. Age verification requirements aren't immune to scope creep or abuse. They’re subject to the same risks as any other law that restricts speech.
I know you can make a stronger argument than this. Besides, if you did your homework, you would find the examples you're looking for.
To clarify, I am not suggesting this is a slippery slope. That was your characterization, not mine. My point is that, based on available data, laws that limit free speech are applied in ways that restrict political, reproductive rights, feminist, and LGBTQ-related expression 89% of the time.
It appears that there may be some misalignment in our discussion. When prompted to share your perspective on the topic, you have chosen not to engage directly. Additionally, the conversation has shifted scope several times, which makes it challenging to address the core issue. If you would like to discuss the topic further, I am open to continuing. If not, I respect your decision to move on.
How is age verification in that category? I've asked repeatedly, but you can't explain.
Your argument is "scope creep". I said that then applies to every law: the moment you have laws, you have the opportunity for scope creep. You deny that. So apparently there is a line for you. Why does this age verification cross the line when something else doesn't? Is it just "it was this way before, so I don't want change"? Anything else?
We don't always agree, but this is spot on. Administrations change, but systems of surveillance and control persist. Just because we can imagine a way for a system to do good does not mean it will mostly be used for good.
Can you explain why you think the online world should be so different to the physical world, which is full of places where I need to use an ID to get age-limited items? I really don't feel monsters were made by stopping children buying porn and alcohol. Do you think those age restrictions should be removed too?
Particularly as more of society moves from physical to virtual.
If bouncers copied my ID, my home address, and a bunch of private data every time I went to a bar, I'd never go out.
This whole premise is absurd. There is tons of research and empirical and historical evidence that living in a surveillance state stifles free expression and thus narrows the richness of human creation and experimentation.
How old are you that you think constant surveillance is any kind of way to live? It's a thin gruel of a life.
This seems like such a lost cause to carry on about.
The fact that the post originates from what appears to be a furry-aligned individual is probably not going to help get a majority of people to be sympathetic.
There appears to be no formidable organized resistance against the recent decades of surveillance boom.
With tech and many tech employees actively accelerating surveillance.
Horrible? yes. And extremely unlikely to be rolled back anytime soon.
(Disagree? I'd love to believe you are right!)
Lost causes are worth fighting for and keeping in the public eye. In the history of ideas being written off and challenges being considered impossible to overcome, many end up swinging back hard the other way, as long as they stay ripe (the framework is preserved and still championed) and there is an inciting incident that swings public sentiment.
Defeatist attitudes and throwing in the towel almost never make sense. Engaging at a lower degree of time commitment is sensible for some, but the meta-commentary about it being hopeless is one of the worst kinds of self-defeating comments a person can make, especially if they aren't in opposition. Whose time are you trying to optimize with the comment? To what point and purpose are you saying this, except to further deflate sails on an already still day?
The zeitgeist isn't purely rational or stable, and change is often nonlinear. I've seen small subcultures with "impossible" headwinds completely own the space within my lifetime. We're just at the heel turn now, and it's not universally popular; many people don't speak up because they are quietly getting VPNs or moving to other forms of non-violent non-compliance.
I suspect a lot of doomposting online is someone writing down their negative self talk hoping some stranger will finally provide a convincing argument that they can use to fight their own feelings on the matter. It's like... involuntary group therapy?
You keep making this comparison, but it's not appropriate. The closest real-world analogy: in order to buy alcohol, you'd need to wear a tracking bracelet at all times and be identified at every store you enter, even if you choose to purchase nothing. If the automated systems can't identify you with certainty, you'll be limited to doing only the things a child could do.
And the real world has a huge gap between a child and an adult. If an 8-year-old walked into Home Depot and bought a circular saw, there's no law against it, but the store might have questions. A 14-year-old might get a different result. At 17, they'd almost certainly let you.
The real world has people that are observing things and using judgement. Submitting to automated age checks online is not that.
It's appropriate (to me) as a limit society has decided it wants, and we should consider whether there is a reason similar limits should, or should not, apply to the internet. The whole article we are discussing is about how that could be implemented in a much more privacy-safe way.
But my point is that it won't be. The laws are getting passed, and there is no privacy preservation, there are no ZKPs, there's nothing except "submit your ID". You keep holding out for good faith, but the folks making the rules aren't acting in good faith. I very much appreciate the discussion here, but I think we're coming into the discussion with a different set of priors, so even if our values match, we might not agree.
Just to emphasize the point, the EU's age verification laws are actively preventing Android users from utilizing third party app stores because the implementation is tied to Google Play integrity services.
> Can you explain why you think the online world should be so different to the physical world
When you show a bartender your ID to buy a beer they generally don’t photocopy it and store it along with your picture next to an itemized list of every beer you’ve ever drank
> online world should be so different to the physical world
If you take a step back, they are _very_ different, in myriad ways. But to answer your question very concretely: because we're turning the web into a "Papers, Please" scenario, and the analogy with "I'm 12 but I can't walk into this smoke shop" doesn't hold. I shared a story on HN that didn't take off about how Google is now scanning _all_ YouTube accounts with AI, and if their AI thinks you're underage, your only recourse after they "kid-limit" your account is to submit a government-issued ID to get your account back.
This has nothing to do with buying cigarettes and alcohol. This is about identifying everyone online (which advertisers would be thrilled about), and censoring speech. In short, the mechanisms being used online are significantly more intrusive than anything in the real world.
I'm happy to accept that they are very different, and I agree the current systems (in the UK in particular) are awful.
However, I think tech people risk losing this battle by saying (it seems to me, and in the post I originally replied to) "any attempt at any age checking on the internet is basically 1984", rather than "we need some way of checking some things while keeping people's privacy safe, and this current system is awful."
Of course, if some people believe the internet should be 100% uncensored, no restrictions, they can have that viewpoint. But honestly, I don't agree.
I'm a huge proponent of legislation that requires sites to send a header indicating that they are serving adult content in the response to that request. I'm also a huge proponent of basic endpoint security that allows a parent to put the device into a mode that checks for those headers and blocks the response (a minimal sketch of such a check follows below).
This doesn't require any of the draconian 1984 measures that folks are insisting upon. The problem is that there is no real incentive to implement true age verification in this manner (which is why nobody has deployed ZKPs); the incentive is to identify everyone. It would be ultra easy to imagine an onboarding scenario during device setup that asks:
1. Will this device be assigned to a child?
2. Supply the age so we can track when they cross over 18
3. Automatically reject responses with the adult header and lock down this setting
But Google and Apple won't do that, because they don't care, and the politicians won't bake it into their laws, because they don't care either: their goal is to alter culture, and protecting children is just an excuse.
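For what it's worth, the header-checking half of this is trivial to build. Here's a minimal sketch in Python, assuming a hypothetical `Content-Rating: adult` response header; the header name and any law requiring it are assumptions for illustration, not anything standardized today:

```python
# Minimal sketch of a client-side filter honoring a hypothetical
# "Content-Rating: adult" response header. The header name and any law
# mandating it are assumptions for illustration only.
from typing import Optional
import urllib.request

ADULT_HEADER = "Content-Rating"   # hypothetical header name
ADULT_VALUE = "adult"             # hypothetical header value

def fetch_filtered(url: str, child_mode: bool = True) -> Optional[bytes]:
    """Fetch a URL, but drop the body if the device is in child mode
    and the response declares itself adult content."""
    with urllib.request.urlopen(url) as resp:
        rating = resp.headers.get(ADULT_HEADER, "")
        if child_mode and rating.strip().lower() == ADULT_VALUE:
            # Block entirely; a real filter would show a parental notice.
            return None
        return resp.read()

if __name__ == "__main__":
    body = fetch_filtered("https://example.com/", child_mode=True)
    print("blocked" if body is None else f"allowed, {len(body)} bytes")
```

The point being that enforcement can live entirely on a device the parent controls, with no ID ever leaving it.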
The issue is, it's not feasible to enforce these sorts of bans, because the internet is too vast. Yes, you can stop people from visiting PH or any of the big sites, but for every big porn site there will be thousands of fly-by-night ones looking to make a buck. Age verification laws create a market for such sites, which can be run out of jurisdictions the law can't reach.
So next, we'd better make devices age-gate their users with attestation and destroy people's ability to use open operating systems on the web. Maybe for good measure we tell ISPs to block any traffic to foreign sites unless the OS reports that attestation.
But people are using VPNs to bounce traffic to other countries anyway, so now we need to ban those. But people still send each other porn over encrypted channels so we need to make sure encrypted platforms implement our backdoor so we can read it all, on top of on-device scanning which further edges out any open source players left in the game.
> But even if age verification was empirically proven to protect kids, I’d still be against it.
Even with an effective implementation via something like zero knowledge proofs? It seems like it's entirely reasonable to say your position is (in this hypothetical) objectively wrong?
Like arguing that even if we know firefighters save lives, you'd still be against it, because "fear and the desire to feel safe are how monsters are made".
I disagree with these policies (because they aren't safe, and I disagree that children face a danger best prevented through this kind of measure), but I also disagree with you vehemently. If I'm wrong and we can genuinely prevent harm, and the worst cost is an inconvenience (again, without the risk of a data leak), then I'm wrong and we should do it.
The parent is not saying they’re against protecting kids; the parent is saying even if these measures do protect kids (which is disputable) they’re against them because of the side effects. For example, it’s a wedge to let governments and corporations de-anonymize the internet and snoop on everyone.
Not hard to imagine that kids in North Korea are exposed to less web porn. Doesn’t mean we want to live in NK.
TFA is talking about how to do it without the side effects. In light of that context, saying that you do not care even if it is proven to be a good thing and can be done safely is objectively the wrong take. You'd be hurting people for no reason?
Just because you say it protects children doesn't convince me it does. And I think "side effects" is doing some heavy lifting there, far beyond the scope of assured anonymity (which seems questionable on its own), for this to be a reasonable hypothesis.
To just go for the extreme: murdering someone for no reason is objectively wrong.
The hypothetical that was brought up stops this from being an opinion and moves it into plain fact territory. If you can prevent harm with no downside, doing nothing is not an opinion, it's pretty clearly just immoral.
> Even with an effective implementation via something like zero knowledge proofs?
Someone is still controlling the execution of this proof. It's possible to deny people access to gated information. It's not about protection. It is about control.
From this week's news, US prosecutors are subpoenaing a list of every person who attended a gay drag show—under a law ostensibly written to effectively enforce age limits for "adult, sexualized" content[0,1].
You cannot separate the social context from the technical problem; or pretend that if you've designed a cryptographic protocol in some Platonic model reality, you've also solved some real problem in the real world. These things are privacy footguns because people want them to be privacy footguns—they're constructed that way, intentionally. The lack of privacy, the deterrent potential of public shaming, is a desired feature for many of the people pushing these things.
The error is in assuming that privacy is a common, shared value people agree on—a starting point for building technical solutions to. It isn't. It's an ideological dividing line.
> "Just like the Kids Online Safety Act (KOSA) and other bills with misleading names, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy... EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database."
I don't see how a scheme where you allow the generation of multiple tokens will be practical when the token itself has value decoupled from the concerns of the generator - such as when the token doesn't give access to your personal account.
If the token signifies that you are 18+ and nothing else, and the generation limits are reasonable, then people will generate some fraction of their total tokens just to sell them, or use their elderly relatives' tokens.
The kids will be trading these tokens to each other in no time. Token marketplaces will emerge. The 18+ function of the token will just become a money/value carrier.
If you limit it to one token per person, the privacy implications will be devastating. All online presence where being 18+ is required will be linked.
I'm not on board with age verification at all. Even if it can be done in a private way. I'll just VPN or something, as I'm in the EU and they're dumping this crap on us now.
I'm more than old enough for anything and I have never been 'carded' in my life. In fact I rarely carry ID anyway (even though it's mandatory). Not going to start now.
Right. There's still something I found unsettling about performing searches without restraint on Kagi (which, until recently, absolutely required being logged in) that I wouldn't have thought twice about on a common search engine.
Unfortunately, the VPN experience has been deteriorating quickly as BigCo and BigGov have been catching up in natural escalation.
Well, given the pervasiveness of KYC requirements these days, I reckon that would still feel not unlike being required to log in in order to use a search engine.
Moreover, it's already fairly common for web service operators to proactively block/shadowblock swaths of VPS ranges.
I wouldn't call it a "good" fallback, but I do have a VPS handy with an always-on Squid proxy (remember to bind only to localhost and use it via an SSH tunnel, or some other secure method, if anyone is going to get ideas from this comment), among the other things I use my VPS for.
I do find that different subsets of services tend to get blacklisted.
Eh, it's still tricky. Visiting from a VPN gets you subpar experiences on around 30-50% of sites, I would say: from search engines that rate-limit you to one or two searches per hour, to things like Spotify simply not working. Forums, social media & co. that aren't doing verification will also throttle you, shadow-ban you, and so on.
I get why some sites use these kinds of IP filtering, but the net result is sadly bad for anyone trying to do this.
Even with a "privacy-preserving" mechanism, I'd remain worried about censorship risk. Are you a government, and you want to punish one of your citizens without lifting a finger? Then deny them the ability to verify their ID with anything!
In principle, you could probably cook up some mechanism to prevent this. But then the information would also be irrevocable in case of error, which I doubt governments would accept. Not that ID verification is a foolproof proxy for the actual physical user in any case, short of invasive "please drink verification can"-style setups, which I worry might look tempting.
My reading of the EU proposal has licensed third parties doing the age verification step.
The gov't could threaten to revoke the license, but doing so would inconvenience all their users, not just the target. So the third party has leverage to dismiss the gov't.
Of course lots of factors in play, but should be at least a bit better than the gov't doing the age checks.
At least in the U.S., the experience is that businesses will do a lot of things if some level of government 'politely' asks them to. "This account is fraudulent, please delete it." (Or perhaps by waving the stick of "for reasons of national security".) The business doesn't really have any incentive to get in a fight over it, especially if the target wouldn't look sympathetic in the media. I haven't heard much suggesting that typical EU businesses are any different in this regard.
In an idealistic (unrealistic) system that might work. Might, because there are still many unknowns. But it is not too relevant, because in practice it simply doesn't work.
The only thing we need and should accept is websites putting a content flag on their site or apps that any child-restriction software or add-ons can read and either allow or block. It is a parent's job to limit what their children access; it is not the government's job to rubber-pad the entire world so you can just let your kids run around like feral pigs.
To get to the gist: you shouldn’t need to show pornhub your ID to verify your age. You should be able to verify your age with an identity provider that issues you a signed token, for example.
The signed material does not contain any identifiable information about you, and sites like pornhub can verify the token with the identity provider to verify your age.
This is an improvement because only the identity provider(s) have your ID, but now you also have a central database of all the age-verification-requiring sites that many people use, along with those people's IDs.
You could argue that the sites requesting access tokens won't be cached or saved, but in practice that's not how it'll work. You could also have a separate request-forwarder service that sits between the age verifier and the site-that-you-don't-want-logged, I guess, which would make it harder to get all the required info in one place.
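For concreteness, here's a minimal sketch of the simple token flow described above, with the identity provider signing nothing but an "over 18" claim. The claim format and expiry are made up for illustration, and the signature is checked offline against the provider's public key rather than by calling the provider back (one small way to reduce what the provider can log). This naive version still has the linkability problems raised above, which blind tokens (discussed further down) are meant to fix:

```python
# Minimal sketch of the naive "identity provider signs an age claim" flow.
# The claim format, expiry, and names are illustrative assumptions,
# not any real provider's API.
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Identity provider: it has verified your ID out-of-band, once ---
provider_key = Ed25519PrivateKey.generate()
provider_pub = provider_key.public_key()

def issue_age_token() -> bytes:
    claim = json.dumps({"over_18": True, "exp": int(time.time()) + 3600}).encode()
    return claim + b"." + provider_key.sign(claim).hex().encode()

# --- Site: checks the signature and the claim, never sees an ID ---
def verify_age_token(token: bytes) -> bool:
    claim, sig_hex = token.rsplit(b".", 1)
    try:
        provider_pub.verify(bytes.fromhex(sig_hex.decode()), claim)
    except Exception:
        return False
    parsed = json.loads(claim)
    return parsed.get("over_18") is True and parsed["exp"] > time.time()

token = issue_age_token()
print(verify_age_token(token))  # True, and no ID ever reached the site
```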
What answer are you even looking for? There’s no proactive law enforcement waiting to bust down your door if you give underage kids alcohol. (Note this is true of nearly all crimes.)
But if a kid dies of alcohol poisoning or drunk driving, you can certainly get in serious legal trouble. Those two things (not wanting kids to be harmed by alcohol, and not wanting legal trouble) stop a very large number of adults from giving minors alcohol.
What stops adults from giving children drugs and alcohol?
You put severe penalties on the crime, then you catch people doing the crime. Offer a reward for catching people, and I'm sure a few kids will hand people in for the reward. They'll be able to prove they got a token from someone (as they'll have it), and then we investigate.
Tokens need to be single-use, or you've already created a new side channel. Time-limiting is also a challenge without creating a side channel, though it's possible with a mechanism similar to 2FA.
No, they do not provide a raw token you forward to the relying party, so it cannot be looked up later. And the data they do provide is compared against a public key that guarantees it is non-unique to you.
Look up Zero Knowledge Proofs, or Kagi's Privacy Pass, if you want to see details.
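To make "the generator can't tie the token to you" concrete, here's a bare-bones sketch of RSA blind signatures, one of the issuance mechanisms used by Privacy-Pass-style schemes. Textbook RSA with no padding, purely for illustration, not the actual Privacy Pass wire protocol:

```python
# Bare-bones RSA blind signature sketch: the issuer signs an age token
# without ever seeing it, so it can't later recognize it at redemption.
# Illustration only: textbook RSA, no padding, not a production protocol.
import hashlib
import math
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

# Issuer (age-verification provider) key material.
issuer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
n = issuer.public_key().public_numbers().n
e = issuer.public_key().public_numbers().e
d = issuer.private_numbers().d

# --- Client: pick a random token and blind it before sending ---
token = secrets.token_bytes(32)
m = int.from_bytes(hashlib.sha256(token).digest(), "big")
while True:
    r = secrets.randbelow(n - 2) + 2          # blinding factor
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n              # this is all the issuer sees

# --- Issuer: checks your ID once, then signs the blinded value ---
blind_sig = pow(blinded, d, n)

# --- Client: unblind; the result is a valid signature on the hidden token ---
sig = (blind_sig * pow(r, -1, n)) % n

# --- Site: verifies against the issuer's public key. The issuer cannot
#     link (token, sig) to the blinded value it signed earlier. ---
assert pow(sig, e, n) == m
print("token accepted; issuer never saw it and cannot recognize it")
```

Real deployments add double-spend checks, batching, and either blind RSA or a VOPRF for the tokens themselves, but the unlinkability comes from exactly this blinding step.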
The pro-age verification folks have been talking about ZKPs for years now. Here’s one of the legal proponents of the Texas law, and now General Counsel at the FCC, referencing ZKPs[1]. More sophisticated folks have been pitching actual implementations for a while.
Setting aside whether age verification is desirable or a net benefit, some of the discourse is colored by folks that want to make it as painful and controversial as possible so they don’t have to do it.
I totally agree with the author's main point: if we must do "age verification," we should do it through third party identity providers and not directly give our information to everyone.
I have a semantic question, though. If I get tokens from an identity provider which I then pass to an adult website, is that really a "zero knowledge" proof? It's been a while, but I don't think that's a zero knowledge proof. Or maybe it is? I'm not sure what the formal definition is.
Yes, it's zero knowledge, as long as you don't consider knowledge of which provider you use to be knowledge. Which, if used at scale, it shouldn't be.
The token isn't one that you receive and use as-is, so there's no way for the token's generator to tie it to your identity. And when redeemed, the generator can only confirm the token is valid, not that you made it (and therefore what service you're using). Kagi has some articles on the technical details for their "Privacy Pass" feature.
However, using a VPN and pre-generating tokens is still recommended to prevent side-channel attacks based on timing.
The problem with de-anonymising the internet is that I don't think the potential risk (my ID becoming public if the ID provider is hacked) is worth the potential good (keeping kids from adult experiences online). Is that position OK? Do I have any ability to avoid that risk? When was the case that "we must do age verification" proven? If it wasn't, what exactly is going on?
So, I don't accept that this is even an acceptable idea. I hate that we are attempting to 'solutionize' on top of bad assumptions, as with this well-meaning article.
The real issue is that there is no proving that this is a "good thing" to be done; there is no discussion of the loss of privacy rights. It has already been decided that de-anonymising is a good thing for corporations and governments, so the rest is just excuses.
This is actually manipulation on the part of governments to trick and coerce individuals into an action they do not want to take. Thoughtful talk about how to "mitigate the risk" is therefore the equivalent of negotiating with kidnappers over the ransom, when the right answer is: no coercion. The answer to these questions should be that those who want them opt in, rather than forcing the risk on everyone.
One of the most ridiculous things about age verification is the assumed ages for using things. For example, recommended age ratings for movies wildly overestimate how old somebody should be to watch them. I was watching NC-17 movies at the age of 7. Powerful experience, but I grew up a normal person. I still remember being 10 and thinking how ridiculous the PG-13 and R classifications were relative to the level of maturity I already had at that age. Thankfully I had parents who didn't care, and I could watch whatever I wanted.
Kids are generally way more resilient than they get credit for, but not all kids are the same.
I have two; one of them was fine watching people's faces melting in Raiders of the Lost Ark when he was 6, the other had nightmares for a couple of days after seeing Gollum in LotR.
The regulations on age are by necessity arbitrary, but I don't think they're completely stupid, even though I agree parents should be the ones responsible in the end.
The end game is naturally to have all your online activity associated with your real ID. The government wants this, Big Tech wants this, thus there are no real barriers, and, in a frog-boiling way, it will be done, at least for the majority of users.
Around 2013-ish I worked on a ZKP-based SAML-like authentication scheme where almost nobody knows anything:
- you could use your corp ID to log in to pornhub, as the provider doesn't know to whom it verifies the request
- pornhub wouldn't know you used your corp ID
We got as far as a demo, but it was never commercialized as far as I know.
This was after a trial project with the UK on ZKP-based age verification, as kind of the next step, where you could verify more than just your age online.
Interesting - did anyone write up how it worked anywhere?
I also worked on a similar system in 2015, which provided anonymity and unlinkability in almost all interactions (you don't know who it is, and you don't know whether the anonymous user is the same one you saw last time).
You did have to pay for the service of course, but it issued blind signature tokens for access (similar to what is described in the article). So the service did not know who actually did what.
It could also provide anonymous attestation of some attribute (like age). This was a bit more efficient and secure in that you did not need to store a bunch of tokens. It could transform the proof to be unique each time (thus giving unlinkability). It would only work if you had access to your private keys (so you could not just give your age proof token to a kid - you would have to give them your entire account and keys).
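A much-simplified sketch of just the key-binding part of such a scheme follows; the names are invented, and the unlinkable proof transformation (which in practice needs something like pairing-based credentials, e.g. BBS+) is omitted, so this toy version is still linkable across sites:

```python
# Toy sketch of "attestation bound to the holder's key": the issuer signs
# (user public key, over_18), and presenting it requires answering a fresh
# challenge with the matching private key, so lending the token means
# lending your entire keypair. The unlinkable-proof transformation of the
# real system is omitted here.
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()      # attestation authority
user_key = Ed25519PrivateKey.generate()        # lives on the user's device
user_pub = user_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# Issuance: the authority checks your age once, then certifies (key, claim).
attestation = issuer_key.sign(user_pub + b"|over_18")

# Presentation: the site sends a fresh nonce; only the key holder can answer.
nonce = os.urandom(16)
proof = user_key.sign(nonce)

# Verification by the site.
issuer_key.public_key().verify(attestation, user_pub + b"|over_18")
user_key.public_key().verify(proof, nonce)
print("age attested, and bound to whoever holds the private key")
```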
This would be great and all, but all parties who are in a position to choose to implement this kind of system or to keep the status quo are already motivated to keep (and expand) the existing systems, for any number of reasons. Everybody (except the end users) loves to keep that juicy metadata and incidental logs of everything.
The corpos and people lobbying for this "age verification" aren't interested in child protection. They are just abusing child protection for their own gain: data collection. There is safe and secure tech to verify age without sending personal data. But the same lying assholes who claim to do it for the children are just vile agents of surveillance capitalism.
It is designed to be a privacy footgun. This wave of age verification bullshit is their foot in the door for "log in with your government-issued ID". Anonymous rabble congregating on the internet, spreading malinformation and expressing illegal opinions, are extremely dangerous to our democracy. The ETA is 5-20 years until another wave of "safety" laws that will require your real identity to be linked to every clearnet website you interact with.