This sort of happened to me once. I was one of the first to identify children's fiction author Robert Stanek as both a terrible writer and an Amazon astroturfer. My blog post about him was widely cited by other critics as they discovered the same things. Eventually this led to posts on a couple of his "fan" sites - almost certainly set up and most content written by Stanek himself - including my home address and accusations that I'm a child molester. He also tried sending me emails threatening legal action. Besides the claims being utterly fraudulent, the letters were so clearly fake that I replied telling him to piss off (in exactly so many words). With a little effort I succeeded in having the posts taken down, or at least made invisible to Google. The more generic attacks continued on and off for years, again as more people discovered the now-numerous accounts of his transgressions and re-triggered Stanek's delusions about a cabal, but it has been at least a couple of years since the last one. Who knows, maybe he'll try again now.
The point is that these kinds of revenge campaigns have been easy for a long time (my blog post was in 2003) and keep getting easier. Using Pinterest in this case seems like an interesting twist, since Google's suspicious love for them is already a matter of some discussion. It's also interesting that an NYT reporter takes a so-predictable swipe at Section 230 as part of the story. While I wouldn't be in favor of holding platform providers responsible for users' content, I do think defamation deserves to be treated with at least the same seriousness as copyright violation, for which special provisions already exist.
P.S. I'm well aware that Stanek accuses me of being the defamer. Such reciprocal accusations are practically always a feature of these cases, including OP, and that should be kept in mind when considering them.
Assuming your statements are true, username checks out.
How did you go about getting the posts taken down/made invisible? I have yet to tick someone off enough for them to try something like this, but knowing myself it'll probably happen someday.
> Assuming your statements are true, username checks out.
Heh. Thanks for the chuckle.
> How did you go about getting the posts taken down/made invisible?
I made my own legal threats, basically. One site was a ProBoards forum, so I contacted them. I didn't pretend to be a lawyer or anything, just told them I'd get one if needed. I never received any acknowledgement, but the posts did disappear from view. I think they were made private, not deleted. Good enough as far as I was concerned. The other site was kind of a cross between Wikipedia and Quora. In that case I found the site owner and sent a similar email. That time I did get a quite-reasonable response, and the article was taken down.
I haven't been involved in a lot of legal stuff, on either side, but enough to know how it works and not fear the process. Once someone realizes that you have the resources and determination to drag them into actual court, they tend to think pretty hard about what persisting might cost them - in this case possibly in terms of reputation even more than money. No surprise that he/they folded quickly.
The statute of limitations is one year, and I was unaware of the defamation for longer than that. But the Internet means it's effectively without limit. A year from last access? To court records with fraudulent statements? They're there forever. The lawyers know they can get away with it now. I don't know the scale of the problem, but I'm sitting on a hill right now, seemingly without remedy. I am a forgiving person.
> ... letters were so clearly fake that I replied telling him to piss off (in exactly so many words).
In this case I wonder if you should contact the law firm the letter is purportedly from (or Stanek if the firm doesn’t exist) warning them that some idiot is impersonating them and sending ridiculous letters, with a copy of one attached. Even if they really sent them it feels like an amusing way of telling them that their allegations are absurd.
I think I saw an example of this happening in another case, perhaps for a letter received by a baseball stadium.
I quickly determined that there was no such entity; that's part of why I felt completely comfortable responding as I did. Here's another example of fake legal threats in this affair, from what I would consider the canonical site about it.
IIRC (it was a long time ago) the email I received wasn't quite the same, but it was broadly similar - fake identity, clearly from someone barely educated let alone educated in law, same general amateurishness as Stanek's books and reviews.
BTW, while I'm here: this deep dive into l'Affaire Stanek might seem indulgent and off topic, but it's not. This is the archetype. This is how online defamation and anti-defamation campaigns play out. They're ugly, they're messy, lots of third parties get dragged in, and people do get hurt. To a defamer, merely getting your target involved in something so obviously tawdry is almost as good as having the initial accusations stick. The internet's infinite memory can be used in more than one way to get revenge, and I think that's important to know.
> Yet even that hasn’t solved the problem. See for yourself: Do a Google search for “Guy Babcock.”
The first "traditional" result for Babcock's name is now the NYT story, but the first 5+ (even moreso on mobile) are thumbnails [0] showing his image with a text overlay of his name and "paedophile".
The first 3 thumbnails link to the sites cheaterboard dot com, cheaters dot news, thecheatalert dot com. The fourth thumbnail is a Pinterest account linking to a ripoffreport post.
All these "top" results feature the same "content" [1] – the victim's name spammed repeatedly, along with "paedophile" and "racist POS".
I know spam abuse detection isn't trivial, but this seems like the most basic kind of spam abuse that Google claims it knows how to downrank.
Yeah, I’m not sure it was a good idea for the author to include a link to that search in the article (it seems like that would boost the existing results).
I think I'd argue that it was a good idea. This isn't me linking to that from my blog that gets 100 views to a post. This is the NY Times linking to that. Love them or hate them, them linking to something will get attention, and I would bet that attention will lead to changes.
Some percentage of the many people following the link from NYT would do so while logged into Chrome. Or maybe Google Analytics is on the web site. Or some other Google property comes into play.
Google would then use the page's apparent popularity to boost its search engine ranking.
Google has to have a method of determining whether a search query originates from a manual request, or if it is a link to the results, such as here.
There are many ways this could be detected: The Referer header from old browsers, the massive amount of data that the search query URL contains (the parameters that are not "q"). Probably Google even has session tracking, and the session would be missing everything before the results page.
Given that Google applies ML so heavily, and that this query would be an outlier in so many ways, I can't imagine how they wouldn't detect this.
>> Google has to have a method of determining whether a search query originates from a manual request, or if its a link to the results
The issue was about how it could be risky to one's reputation by linking from a NYT article to a disparaging page.
My parent asked how such a risk might arise. I assumed they could not imagine how one's browser, NYT and the target site could conspire to affect rankings.
I explained how this might happen. So Google knows there's traffic. Some traffic ranks higher than none, and much traffic ranks higher than some.
So letting Google see that a well-trafficked site is slating you is riskier than if it were a site with few visitors, because that site will now rank higher.
This is a stunning article. Imagine if everyone focused their deplatforming efforts on sites like this.
- DDOS attacks, if those don't work aggressively go after Cloudflare or their DDOS shield
- Pressure their payment processors to deplatform them
- Get Visa and Mastercard to drop these sites
- Once they have no DDOS shield, pressure their hosting providers to drop them
- Set up landing pages to bump them out of search results
- Aggressively pressure Pinterest et al. to take down any posts with "Ripoff Report" or "Cheat Alert" in the name
A similar playbook to Gab/Parler, but efforts like this would really, tangibly help people and right a serious wrong that can affect anyone, regardless of their beliefs or politics.
Do you think that creating such laws magically solves the problem? You don’t think people will just adapt to accomplish their goals within the technicality of the law? Cyberbullying was a huge problem when I was growing up and laws were constantly being passed. Did cyberbullying somehow become less prevalent, or its perpetrators easier to punish? It seems like the problem has only gotten worse.
The reason is one of organization. Gab/Parler was intentionally trying to organize white supremacists; that was the entire goal and motivating raison d’être. Cyberbullying like this is not some shared ideology but a personal vendetta. As such it’s impossible to solve with the legal system or technology - at best you get a punishment effect a small fraction of the time, and because it takes a significant amount of time and effort to convince the authorities which party is telling the truth, punishment ends up reserved for extreme cases that devolve into real-world altercations.

Slander and libel are already actionable, but people can’t seem to acknowledge that you can’t legislate away the fundamental problem: communication is now cheaper and more widespread than ever. Imagine if you could anonymously distribute pamphlets in multiple cities with minimal effort and with very little chance of it being traced back to you. People fail to recognize that there’s nothing technology can do to solve a fundamental human failure mode here. People get pissed off at each other, and one party starts slandering the other.

Legislation would need to focus on making it easier to deanonymize. But such legislation is actually bad for the internet and doomed to fail anyway, since technology makes anonymity easier than ever in ever-increasing ways. From a social well-being perspective, anonymity is critically important for those who legitimately speak truth to power (eg sources for reporters).
Sorry, I'm not sure what part of my post you're responding to — I didn't mention anything about laws.
I also don't really fundamentally disagree with any of what you say in your post, I think it's a rather accurate appraisal of the situation.
That said, as it comes to Gab/Parler, I think a lot of the problem is technology related, and could theoretically be fixed by social media platforms adopting non-user-antagonistic content-ranking models. If the algorithm behind your platform maximizes time on site, it will maximize anger, outrage, and quick dopamine hits.
If the algorithm (or moderation process) functions differently, that's not the case, and I think we see this to some extent on Hacker News (not saying it could scale to Facebook level though). I hope other "ethical social media" platforms like Okuna (https://about.okuna.io/) will follow.
> Do you think that creating such laws magically solves the problem?
Creating laws has done a lot to solve the related problem of "mugshot sites". They're illegal in enough states at this point that they're approaching impossible to run -- which is a good thing; they were a morally bankrupt extortion scheme.
It’s a bit of a stretch to claim that Gab and Parler were “intentionally” trying to organize white supremacists. But I agree with your point that people will always find a way to be cruel to one another, regardless of the underlying technology.
It's not. It's made clear in the court filings in the AWS lawsuit, as well as background research into the founder and funders, as well as the data dumps from the site hack/scrape.
The reason why we don't focus on deplatforming sites like this is precisely because it's a concrete problem with a concrete solution. If instead you're a culture warrior fighting "fascism" or "the alt-right" you'll never run out of blue check marks to deplatform and your personal influence will never stop growing.
A simple thing would be for Google to de-list slander-based sites like the ones described in the article. Or just personal name search for things like that.
At the same time, it's remarkable that people put credence in completely random sites that let anyone post anything.
Towards the end of the article they state this is what Google has started to do with sites that make it hard to get false info removed (like requiring a fee).
We can't know which reviews are truthful. Presumably some are, some aren't. Obviously allowing the business owner to make that determination is dubious.
I can go and buy however many Yelp reviews I need right now, good or bad, for as little as $5/review. Dig deeper and you realize Yelp itself uses heavy-handed tactics, like the mafia in Italy, to extort store owners.
Reviews are BS anyway, I don't know you therefore why on earth would I listen to your opinion on something?
Maybe actually knowing a reviewer personally would help, by letting you note their consistencies, idiosyncrasies, or honesty; but that's a long-winded method. You can glean a ton by following any person, honest/consistent or not - after all, that's what has made social media owners so powerful.
I'm surprised they've allowed this to continue for as long as it has. At the very least they shouldn't be indexing low quality content from sites labeling someone a pedophile. That would go a long way towards stopping all but the most sophisticated of cranks - which the individual in the article clearly isn't.
Not because of what they expose, but because of what they incite.
I am very concerned that instead of strengthening the legal path for defamation claims, politicians will take the easy (and favored) path of government control of speech.
Do not misunderstand, I am not even remotely suggesting that such behavior somehow needs to be protected. What I am suggesting is that the layman will demand blood, and the easiest and most vocal way to deliver it is for government, or agents of the government (in this case the large organizations that control internet content), to restrict speech. That path is already fraught with political vengeance.
Find the culprits. Prosecute & punish. Force large internet content companies to scrub.
We can demand that sites pop up notices about cookies. Certainly we can demand that a single quick request "to be forgotten" be honored.
(Mark my words - if this gets legs, it will be the speech control, and the control will be handed to companies or quasi-government agencies, made of companies.)
What _should_ happen is that people should not believe allegations that have no evidence.
Censorship and defamation law are just patches that we're desperately trying to use to make up for the sad fact that critical thinking was never strong in humans at seemingly any point in known history.
Yes, it’s a wonder these idiotic apes somehow created the modern world...
We are wired to live in groups of a couple hundred individuals, at most. We can “default to truth” in that setting because all of our interactions are repeated indefinitely over time, and the stable solution to such an iterated prisoner’s dilemma is to be honest, because cheating will be swiftly punished.
Mass media, and especially the Internet, breaks all of our social brain circuitry, and that is not a trivial problem to address.
The most “revenge” thing I ever did was to send a copy of the town’s noise bylaw, anonymously, to a nearby resident who had held a very loud “backyard concert” on labour day weekends over the previous two years.
> His punishment was an anonymous threat of violence by way of institutional power
> Yes, it is a threat to send armed agents of the State to intervene [1]
This is farcical at best. No 'threat' was made, let alone one of violence. At worst, it was a reminder to the offender to get a clue in the first place, and not to do it again.
I've yet to hear of any bylaw officer, anywhere, showing up in armor and weapons in order to warn a violator, or issue any citation against one, ever.
You're probably the type that takes offense at "Beware of Dog" signs, believing that you shouldn't have to be bothered to 'beware' of anything. :rollseyes:
Shouldn't, rather, everyone else in the area, also party hard on such a holiday?
Or at least allow people who want to the fucking possibility to do it once a year?
Or should no one party loudly, ever, lest they annoy someone in the neighborhood?
I have neighbors that have large, late parties on major holidays. Being where I am, they run until 3am or so. That happens 4-5 times a year (different people). And we're talking apartment buildings, units touching or above/below each other, not some US suburban neighborhood where the neighbor's house is 200ft or more from yours.
Still I could not care less. Close the windows, wear some ear plugs, and sleep.
Now, if someone did it consistently, on weekdays, and for a longer period, sure...
You "not caring" shouldn't mean that everyone else should, or need to 'put up' with someone else being abusive. I don't need to put up with someone else's loud noise. I don't need to not be able to sleep (or my kids). Someone deciding to have a party shouldn't be a burden on anyone else.
In my area, we actually have a legal right to make use of our properties without being excessively inconvenienced by others; some of it is defined by bylaws (ie. noise bylaws), and other ways as well (federal, constitution, etc.) One example: local electric company had installed a new utility pole in a location such that the wires went over someone's pool; that family couldn't reasonably enjoy their pool because of it. Utility company argued they had easement access (granted by government) and could put their poles anywhere they wanted. Court sided with family and utility company had to move the pole(s) so that the wires routed along property lines.
Besides this, doing something this disruptive to other people's lives is very disrespectful of others. Of course in some parts of the world, 'individual freedoms' seems to take priority regardless of any situation, even when it means negative consequences for others.
And when I say loud, I mean F$#@ING LOUD - I walked by their house one of those nights, and it was near outdoor-concert-level loudness. Make your ears ring in pain loud. They were down the road from us, with some other houses/obstructions blocking some of the sound, and still even with that and the windows closed, we had to turn the TV volume way up in order to properly listen to shows while this went on. I can't imagine what it must've been like for their immediate neighbours.
As someone else here stated (they had lived in Japan), they could easily have rented out a space somewhere to do this, which wouldn't have bothered anyone. Hell, even used their own basement instead - the properties had postage-stamp-sized backyards, which could accommodate 15-20, maybe 25 people comfortably; they actually would've had more space in their basement to do it.
I'm with you on this: the very occasional loud fun should, in my very humble opinion, be allowed to happen without fuss.
Where I'm from, about 3-4 times a year, people in the area will go ape wild on fireworks until the wee hours of the morning. It's not legal from both a safety and noise standpoint, but they do it responsibly, so who cares? They won't bust out the high-flying rockets unless we've had rain, they do it in their various mowed fields, they do it sober, etc.
So the only real complaint would be that it goes on until 2-3am. With a baby the last few years and agitated dogs, it's not exactly "fun" for us, but... I can live a few nights of the year with interrupted sleep while my neighbors have a grand time celebrating.
It would be one thing if their celebrations caused conflict or extremely dangerous situations—one could argue shooting fireworks qualifies—but my personal risk analysis of their behavior doesn't leave me concerned. Nobody is driving home drunk, getting into fights, etc.
So who cares?
I'm sure someone in this world may be inclined to complain about odor when I smoke meats a few times a year (I don't think any such sorts live around me, though), but I would expect the same kind of consideration that I extend to others when they very seldom and gently encroach on my comfort.
I guess what I'm getting at is this: it seems that the parent poster is the self-centered one. I acknowledge that I don't understand their situation, but I find it exceedingly difficult to fathom a scenario in which a single night a year is severely detrimental to their own well-being.
Let them have fun, ffs! What kind of party pooper is so self involved to do this sort of passive aggressive attack? I'm sorry, parent, but I think maybe you're the bad neighbor.
I had to deal with this sort of thing once while I was living in London. A letter was sent around, I guess up and down the street of flats (I don't know how far), which explained they were planning to have a blowout party for some significant reason and that things were likely to get very loud. There was contact info in case someone wanted to discuss the matter beforehand.
The party went off and it most certainly was very, very loud. The place was one or two buildings down from me and I could hear it till ... rather late. The meaningful thing, however, was that they _had_ sent out a letter beforehand ... which was pretty courteous in my view. I guess no one complained (or the English are just so very polite...) and the party did not get shut down.
The parent comment did not state anything like this... so I think his response was quite reasonable. A one-off thing that gets out of hand... ok, you can let that slide in the interests of getting along. But to do it again without any sort of heads-up or something of the sort is definitely a motivator for setting some boundaries. Perhaps people complained, and the party thrower got defensive and so did it again out of spite. I infer from the language used that they're in America, so 'freedom!' and all that crap....
So the reason your comment is getting downvoted is quite understandable; I suggest you try to really place yourself in the actual 'shoes' of the parent poster and think about what they might have left unstated. The passive-aggressive nature of the response is certainly something to point out, but then again, America is awash in guns. And if I am right about America, then areas in America are quite varied (I am American), and so social/cultural norms differ from place to place, as do urban/suburban/rural settings. There are lots of variables here to consider that have been left out, it seems.
I have placed myself in their shoes when I discussed the huge fireworks going off near us, within 200 yards, until 3am several times a year. I spoke to the neighbors, analyzed the situation, and left it alone.
I'm being downvoted because I suggested that parent put on their grown-up pants and deal with the nuisance 1 out of 365 nights of the year.
Were the partiers rude? Yes. Were they a nuisance? Yes.
Should parent feel "proud" to passive-aggressively threaten the partiers with the force of the institutional monopoly on violence?
Uh, no.
They didn't try to talk to their neighbor. They wanted to enact vengeance. (That is, after all, the whole point of this thread.) And they are proud of their escalation without attempting any other means of resolution.
So if parent doesn't want to allow a neighbor to indulge and encroach 1 of 365 nights, okay. That's their prerogative. I disagree and think it's childish, but fine.
But they removed the humanity from the situation and with absolute cowardice anonymously threatened the neighbors the third year based on the unfounded presumption that it was intentionally antagonistic, I guess?
While I understand that much of Hacker News may not want to be bothered by the noise - and thusly my suggestion to grin and bear it is grounds for downvoting - I absolutely struggle to see why they view an attempt at human-touch resolution as inferior to threatening the police on someone.
I'm sure I'll be downvoted for this comment, as well. The trend to avoid real human interaction in favor of tattling to Uncle Sam is accelerating, so I imagine my voice is unsettling.
I appreciate your response, however. I tend to get downvoted without reply or explanation, and presumably because they disagree and not because it is out of context for the discussion, as stated in the rules. But these comments are in violation of the rules, so I'll stop talking about it, now.
LOL sending a copy of a bylaw is now considered a “threat”??
As someone else said, they could have “kept the humanity” (as you state it) by either keeping it down or warning everyone ahead of time (you know, showing respect) themselves in the first place.
PS: others downvoting you is their “passive aggressive” way of saying they don’t agree with you, and you think less of them simply because they didn’t reply instead.
Yes, it is a threat to send armed agents of the State to intervene. Is it something other than that?
I do not dispute that the partiers should have acted more considerately. But this whataboutism does not absolve the behavior of parent, unless you suggest that one inconsiderate act should beget a like response?
And you are correct: I think less of someone who is unwilling to negotiate their opposition. I appreciate your response and willingness to challenge.
I somewhat agree with you that being loud once a year isn't that bad but .... if I were to do it I'd invite the entire neighborhood (which also ends up being notification and a chance to object)
Also, in Japan this would almost never happen. If someone wanted to throw a loud party they'd rent a space for it. That doesn't seem common in the USA in my experience. But, having gotten used to it in Japan, it's kind of disappointing that it's not more common. Sure you can sometimes rent an entire bar or restaurant in the USA but it seems super rare, whereas it's common in Japan. On the other hand, it's also not common in the USA to pay for a party, but in Japan no one blinks an eye if someone invites you to a party and says "We're having a party Friday night. It's $30 a person"
I know it's not entirely the same because it's indoors.
Since you don't know the noise levels of the fun or any other things that might have made it especially aggravating for the poster, that seems like slightly too negative an interpretation. I know I would probably let someone else slide on having big noisy parties - based on experiences with my neighbor - but part of that was the feeling that the poor guy has to be neighbors with me, which can't be easy.
I feel like this would be the one legitimate criticism of Section 230, but I don't really know how you would "solve" this problem. Maybe a DMCA-esque system for defamation, but on the other hand I feel like that's also ripe for abuse and would hamper online anonymous speech. Is it fair use to post a picture of someone if the text accompanying that picture is libel?
This reminds me of those "MyLife" low-lifes who mine the internet on everyone's "behalf" and then email people telling them they have a "negative reputation".
It's possible to remove yourself so that your name does not appear when someone searches on MyLife.com by making a tedious phone-call to their call-center in India, but then they email you EVERY DAY with offers to clean up your online reputation for some fee.
The vengeance griefer in the NYT story is obviously a deranged crazy, but one person can only do so much. The guy who started MyLife however, Jeffrey Tinsley, is a multi-millionaire who has successfully gotten venture capital funding-- and causes grief on an internet-scale.
I wish there would be more media take-downs of MyLife, Classmates, Radaris, other scummy operations. Even better, I would expect that GDPR rules put in place for the USA would put these people out of business?
A time will come in the US when people will come to understand that the Internet exists, and that one does not take any random thing on the Internet at face value.
I've been on the Internet under a few identities since 1998, and have since mostly stopped trying to hide.
The more your online notoriety grows, the more your IRL consequences snowball.
I've been to many countries, and in most of them you cannot character-assassinate anybody by randomly calling the guy's workplace and saying "guy A is a dreaded pedophile!"; at most it will make people chuckle and remark on you being famous.
> It will come a time in US when people will come to understanding that the Internet exists, and one does not take any random thing on the Internet at face value.
It's been 30 years and so far it hasn't happened.
I also very much doubt it will ever happen, as this would go against all we know about basic human psychology. Rumor sites such as this one tap into the same mechanism that has made rumors, conspiracy theories and superstitions effective since forever: once people have decided, for whatever reason, that they trust a particular source of information, it will take some exceptional events to make them distrust it again. Someone else simply stating that the source should not be trusted is by far not enough.
30 years is not a lot of time. People who grew up with the internet are only now coming into positions of power. A majority of American Internet users grew up in a very different era, and their experiences and expectations may not be reasonable in the current era. Furthermore they have been passing an understanding of media from the 70’s onto future generations.
I suspect that it will take quite a lot of time for society at large to adjust.
For example, many kids are taught in school that Wikipedia is not reliable since anyone can edit it, whereas corporate publications are more trustworthy since they are written by experts whose reputation is at stake. Wikipedia is not always reliable, but it is verifiable/auditable, and that is a criterion of trustworthiness that is not widely appreciated in digital literacy.
Also we should teach that these other sources on softer topics are sometimes editorialized...
I don't see any issues using it for many topics, especially in computer-science and hard sciences. But there are biases with some content.
"Corporate publications are more trustworthy..."? My brief exposure to that arena would lead me to disagree strongly (but I am an engineer and not in marketing). Everything in my experience has to be vetted by multiple layers of the organization or at least people versed in the art of saying something while actually saying nothing. "Corporate publications" is a very broad term, of course, so it depends - financial type things, yes, that has to be pretty 'solid' though there is still room to elide and be disingenuous. Whitepapers are essentially just marketing material; being an engineer and thinking like one is really detrimental, thus I just couldn't do that sort of stuff... And some subset of shareholders will always try to interpret <whatever> is any number of ways, so that is another reason to spin things as needed while not seeming to spin things.
Wikipedia has a pretty robust set of internal controls and correction policies IMO. It may not be authoritative, but it tends to be a good starting point, unless the topic is obscure etc. My brief exposure to higher education would be that yeah you don't want to let students treat Wikipedia as a primary/only source (the lazy ones have that attitude), but I wouldn't go so far as 'not reliable'.
The fundamental issue is how do you teach people/children to synthesize different sets of information and decide what is useful and what isn't - i.e. to think for themselves? Part of that effort is also teaching them to be self-aware enough to understand how their emotions or emotional issues might color their interpretation of the information, never mind evaluating the source of the information first...
A lot of parents think it is a really bad idea, teaching children to think for themselves and to be self-aware...
Wikipedia's reliability largely depends on how much traffic an article gets. The more eyeballs on an article the more likely people are to point out inaccuracies. It's also much more reliable for scientific and technical subjects that aren't really subject to ideological influence or bias.
In my experience, niche pages tend to be the most susceptible to uninformed or biased editors.
>I've been to many countries, and in most of them you cannot character asassinate anybody by randomly calling guy's workplace and telling "guy A is a dreaded pedophile!,"
I don't think she did that. What she did (post tons of negative stuff online) did have negative effects when people applied for jobs:
> A relative of one lawyer said she spent months applying for jobs in 2019 without getting any offers. The woman, who asked not to be named because she feared Ms. Atas, said her bills piled up. She worried she might lose her home. Then she decided to apply for jobs using her maiden name, under which she hadn’t been attacked. She quickly lined up three interviews and two offers.
When applying for a job, the resume screeners don't know much about you, so they rely on what they find online. With coworkers who actually know you, they don't need to rely on stuff online.
The reality is that someone doing a quick Google for a resume screen, potential date, or whatever is going to devote about 30 seconds to the effort and they'll probably move on if they uncover something sketchy rather than launch into a more detailed investigation.
It comes down to what counts as admissible evidence in court.
Also, on and off, countries have tried asking travelers to submit all their online social network profiles, under threat of perjury, during immigration. Every time there is a hue and cry, it gets rolled back, but one has to be vigilant.
> people will ... not take any random thing on the Internet at face value
Have you been following US politics lately? A great many people really seem to believe that Trump lost the election only because of massive voter fraud, and that his enemies are literally harvesting children's blood for adrenochrome. And we're not talking just about random nobodies here. There's at least one member of the US Congress (Marjorie Taylor Greene) who is pretty open about believing this stuff, and two or three more who are less obvious about it. I've been on the internet over a decade longer than you, and the trend has never been toward people being more discerning about what they read.
I've already written in this thread about being targeted, and it wasn't even the first time. The other, on LambdaMOO, got written up in a book. OP mentions not one but several examples. These campaigns do have real-world consequences, not just chuckles and remarks about fame. Maybe your experience does not include anything like that, but please don't act like your own experience must be the norm and anyone else's must be the anomaly. That's the most egregious kind of selection bias.
"It will come a time in US when people will come to understanding that the Internet exists, and one does not take any random thing on the Internet at face value."
I have to ask, have you heard of Qanon? Do you remember what happened in Washington DC on January 6th?
Because I've been on the Internet for quite a while now, and it's not getting better. It's getting worse. Much much worse.
> It will come a time in US when people will come to understanding that the Internet exists, and one does not take any random thing on the Internet at face value.
That was the case before the internet exploded in popularity. People were very suspicious.
It feels weird how things have changed in the past 20 years. As children we learned not to trust anything on the Internet, or to share our own information... And then everyone started using social media and sharing everything...
Maybe we had it right back then: if you are using your real name or connectible info, keep it professional. And a lot of the information out there is false... But now it seems all of that has gone away...
We need to label inauthentic speech. As well as have verified authentic speech.
Not ban inauthentic speech. I'm NOT saying ban trolls, bots, insurrectionists, and vexatious people like Nadire Atas.
I'm simply saying that if we also had authentic speech, people (culture) would have a tool to discern inauthentic speech. And then hopefully wise up a bit.
What exactly is the difference between authentic and inauthentic speech, and who is doing the labeling? What is stopping an "authentic" source from taking money to launder authenticity?
The term 'authentic' is often used as a deep well of bullshit to arbitrarily filter ingroups and outgroups, with about as coherent a logic as middle school students deciding what is cool.
Having been involved in one such affair (see my other comments on this story), I'd put that in the "true but not useful" category. It still allows a form of blackmail. Why should the target of defamation have to pay to do this? What happens when most people's online reputation is mostly a matter of how much they were willing to pay to a reputation-washing company, and people in the lower economic echelons can't afford to play? These kinds of companies already exist, and make as much money burying genuine misdeeds or making up accomplishments as anything else. How is increasing the size and importance of that industry a good thing?
It’s just a way of breaking the “business model” for these companies.
If you pay up, there’s nothing preventing them from trying to hit you again with more demands (in fact it makes economic sense, since they don’t need to generate new “content”). If you go and hire an SEO company, you’ve broken the economic reward for the scammers, or at least you’ve reduced it.
Whenever I see the name “Babcock,” I think of The Passage.
This kind of obsession is nothing new, but it is (thankfully) rare, as is the case for a lot of diseased behavior.
What is new, however, is the modern, interconnected world (air travel, the Internets Tubes, etc.).
This acts as a multiplier for this kind of thing; affording lone actors a powerful gun platform, and allowing extremists that would normally be confined in small, local quarantines, the capacity to band together.
Not sure what the answer is. There’s talk of changing Section 230; not just from authorities, but also, from many mainstream communities.
On one hand, I think that the ability for minorities to band together, and seek strength and support in numbers, is amazing, but you also get stories like this (and it isn’t really unique).
At the risk of “Godwining” the thread, it could be argued that Hitler wouldn’t have been able to come to power without a powerful and efficient propaganda machine, looking a lot like this, so it predates the Internet.
It’s a pain to remember which workaround still works at specific sites. I like archive links because they’re easy to find in the comments and they tend to work.
That's true, but I just leave javascript off for sites where that's enough. Various things break, e.g. you don't get pictures on NYT, but those don't add much to the story.
It seems to be much easier to attack a reputation than to fix it.
Like if enough women accuse Bill Cosby of something, then the court of public opinion will say he’s guilty. (I understand that in a real court he was convicted by a jury based on his own deposition from 2005 though). But on the Web, what does it cost people to just make stuff up and use botnets or GPT?
I believe in the concept of free speech, but I’ve said it before: free speech is the right to speak without going to jail, and that’s the extent of it. It doesn’t mean the right to be heard, or the right to speak without other consequences. And this is a clear cut case where the supposed right to be heard is profoundly damaging to the targeted individuals and to society as a whole.
So why don’t we apply the same standard to speech which targets larger groups? Why is it OK for platforms to publish the 30K lies [0] made by the former US president - for the profit of both the platform and the politician - but not ok for these horrid shame sites to exist? Why is it OK for Google to redirect these people and sites to /dev/null, but not for Twitter or Facebook to do the same?
Anyone who is upset by this article needs to understand that the only difference between the vengeance nutbags and lying politicians is the scale of the damage they can cause.
If you only mean that you believe in the right to speak without going to jail, and no protections of speech beyond that, then ok. I mean, I strongly disagree, but I acknowledge your position of not supporting protections beyond that.
But you should know that that isn't what other people generally mean when they say "free speech". For example, most people would also say that freedom of speech also requires that people aren't fined money for saying things that they believe, or things like that.
If by saying this you are trying to shift the public understanding of the term in a direction of such minimal applicability, I would oppose this shift.
I was being slightly hyperbolic. A proper definition of free speech - not my definition - is that it is the freedom to speak without interference _from the government_. So fines, jail time or anything else driven by the government is out. This I can agree with to a large extent.
Free speech does not mean, and has never meant, the right to be free of consequences for your speech, or the right to speak anonymously in public, or to have your identity hidden. Free speech does not give you the God-given right to post anything you like to any service or publication. You can’t force the NYT to put your opinion on their home page and you can’t, or at least shouldn’t be able to, force Twitter to spread your lies.
And, again in my opinion, if someone does say something harmful then there needs to be accountability. That means that if someone is hurt by someone else’s speech, then the victim should be able to sue or otherwise take action against the perpetrator.
Here’s the problem. Sites like Twitter and others can publish anonymous hate speech while hiding behind the CDA. This means neither the anonymous asshole nor the profitable company can be touched, and this is causing huge damage to our society.
The solution seems pretty simple. Anonymous speech should be the responsibility of the platform, and subject to civil action against the platform. Speech published by platforms which is not anonymous can and should be the responsibility of the individual making the claims.
This simple change would force platforms to moderate all their anonymous content, instead of the current, low cost statistical methods they are obviously using today.
Also, it does not mean that speech can’t be published anonymously. It just means that the service must know the details of the writer, and that the anonymity of a user can be uncloaked by a court.
Note that this is how “letters to the editor” worked for a very long time.
If you like your privacy, then don’t intentionally invade the privacy of others, or otherwise harm them, with your lies.
Many people argue that freedom of speech "is the freedom to speak without interference _from the government_".
I disagree with that; freedom of speech is firstly a societal norm and secondly a government policy. And if I could choose only one of the two, I would much prefer it as a societal norm rather than a government policy, precisely because government is downstream from societal norms. I.e. societal norms eventually become government policy.
In my view, then, freedom of speech is much more a social phenomenon than a governmental one. Are you able to speak about the things that matter to you without persecution from any entity, governmental or not? The reason why freedom of speech is important, is because we want ideas that are counter to mainstream narratives to be able to set root in society. History tells us that societies need to adapt and change over time for various reasons. And freedom of speech is the engine of that change and adaptation.
> The reason why freedom of speech is important, is because we want ideas that are counter to mainstream narratives to be able to set root in society.
We want ideas that have value to take root in society, regardless of their relationship to the mainstream. Not all ideas that are counter to mainstream narratives have positive societal value, nor do all ideas have equivalent value to all others. Alchemy does not have equivalent value to chemistry. The belief that the earth is flat is not equivalent to the belief that it is round, even if neither is perfectly accurate. The Nazis were certainly outside the mainstream, but were also wrong, and evil, and having their ideas take root again now that they're no longer mainstream is never going to be a good idea.
Marjorie Taylor Greene (to pick an extreme topical example) has the right to believe in QAnon and Jewish space lasers, but everyone else has the right to call out her racist, paranoid BS, and one kind of speech will inevitably bring consequences for the other. Freedom of speech that doesn't allow the freedom to discern the quality of speech isn't really freedom; that's just having society act as a dumb terminal for whatever noise comes its way.
If society can not out-argue "The Nazis" in open debate, then what is going on?
If an idea is bad, the worst thing one can do is try to censor it completely; that, if anything, lets people know that one is afraid of their ideas. Such things come back to bite oneself. (Using the term "one" here rather than "you" to avoid sounding accusatory)
Allowing others to discern the quality of speech is of course part of the whole deal. The border being crossed here goes somewhere along the lines of the current cancel-culture trajectory.
> If society can not out-argue "The Nazis" in open debate, then what is going on?
What's going on is that the world isn't a debate club and the Nazis aren't interested in playing by the rules of the fair fight you want to give them:
“Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.”
― Jean-Paul Sartre
By trying to engage dishonest people in honest debate, you're only making a fool of yourself. Why? Because the playing field isn't level. Radicalization is much easier than deprogramming, and lies spread much more quickly than truth, because most human beings aren't swayed by logic and reason, but by emotion, ego and self-interest. There aren't enough of the rest to matter.
Take QAnon for example. That went from a meme on 4chan to a political cult powerful enough to affect national politics and fuel an attempted insurrection. Do you think no one has ever tried to explain the error of these people's ways to them? Did the years of having them "exposed to sunlight" on all social media platforms do anything but increase their spread and their toxicity?
Anti-vaxxers have been freely spreading their beliefs online for years. Has any of the debate or criticism actually stopped the movement, or even slowed it down?
Can you even name one example where a violent political movement or conspiracy theory was stopped through nothing but polite debate?
>If an idea is bad, the worst one can do is just try to censor it completely, that if anything let's people know that one is afraid of their ideas.
The implication is that people will believe one is afraid of their ideas because they might be true. There may be some people who think that way, but most people don't. And honestly, I was fine with these people being quarantined on Parler and Gab, and if they had been able to simply discuss their views without using their platform to put them into practice they would still be there. But speech doesn't exist in a vacuum, it exists in a context and Nazis intend to put their ideas into practice. Trump supporters and QAnons planned to "stop the steal" by kidnapping and killing elected officials. Anti-vaxxers plan to undermine America's vaccination attempts and destroy herd immunity.
So yes, I'd rather try to at least slow down their ability to recruit and radicalize people by not giving them carte blanche on the biggest social and political power multiplying platforms humanity has ever created. If that means some fools suddenly think anti-semitism and crystal healing are hip because they're forbidden, let them stay on /pol/ and make stupid memes.
>Such things come back to bit oneself. (Using the term "one" here rather than "you" to avoid sounding accusatory)
Such things bite even harder when left alone to gather strength in numbers.
>Allowing others to discern the quality of speech is ofcourse part of the whole deal. The border being crossed here goes somewhere along the lines of the current cancel-culture trajectory.
The people with a vested interest in spreading hatred and fear of "the left" have gone so far off the deep end with hyperbole and scare-mongering about "cancel culture" that they've rendered the term as meaningless as "socialist" and "SJW."
But, of all the arguments you could make against cancel culture, it getting used against Nazis, violent seditionists and cranks isn't the most convincing.
Well, I suppose as long as you're on the good side of history it's OK to apply censorship to people who are on the bad side of history. Isn't that what the argument you put forth boils down to? I mean sure, ethically, if you know that bad things will happen if you allow free speech, then you are ethically obliged not to allow it. Not a very complex situation ethically.
But what happens when you are wrong about the bad things that would happen if you allowed free speech? Being right in some cases does not mean being right in all cases. And as far as I can see, it is not a clear-cut ethical situation in that regard.
There is a clear line-drawing problem. Is all nationalism equal to nazism? Is all vaccine-skepticism anti-vaxx?
I totally agree that debate almost never actually changes the views of the one you are arguing with; iirc, there are studies that show the opposite is more common (that they become more firm in their views). However, debates are seldom held in private, and onlookers are far more likely to be subject to positive influence if they do not have a horse in that race. And I do believe that if debates are allowed, the majority of them will be won by the positive influence, meaning the positive influence will be the dominant one in society.
And regarding the escalation of violence that "Trump supporters and QAnons" have been part of, what came first, the violence or the censorship? I believe that censorship often leads to violence, because what alternate conflict resolution channels do you leave available when debate is stifled?
Free speech is two things: 1) the legal protection which you already defined and 2) a guiding principle of democracy.
Cutting anonymous speech out would be just as dangerous. Same for demanding Twitter regulate "lies". If Twitter was around in the early 00s then they'd be deleting popular and dangerous conspiracy theories such as "there are no WMDs in Iraq".
There can only be one solution, which is to teach the average pleb better critical thinking.
I believe that twitter shouldn’t be legally forced to share whatever things someone wants to say on it, but I also think that it can be bad for twitter to choose to exclude certain positions.
Sometimes someone should do (or refrain from doing) something, but shouldn’t be made to do (or refrain from doing) that thing.
I think my major expectation for what claims a website allows users to make on that website, is that they make their policies for what they allow, not just technically stated in a ToS somewhere, but actually generally understood by viewers of the site. If someone wants to make a website which forbids any speaking in favor of putting orange juice on cereal, but allows lots of other things, then as long as what they forbid is clear, that’s not something I would think is bad. (Weird, yes, but not bad.)
Now, for handling libelous speech on a platform, I guess that is an issue maybe?
Honestly, “what about libel” seems like the most convincing argument I’ve heard for “websites should know who the users they automatically publish submissions from are”. But I’m still not convinced.
To be frank, libel seems like a hopelessly outdated legal construct of feuding nobles and notions of "honor", one that lets the strong silence the weak. It is utterly impossible for it to be even remotely consistently enforced, even by the bottom-of-the-barrel standards of prohibition.
More cynically, it is outright delusional to think that it will ever protect the weak from lies when its main purpose is to protect powerful people like Jimmy Savile from the truth about their crimes. It is a bad tool and should be tossed in the garbage.