
It would be a good idea to present stronger evidence before saying HN is astroturfable. It has one of the strongest anti astroturf systems I’ve seen, primarily because of how HN works (which, sadly, is hard to discuss openly).

I believe you about Reddit, but it’s going to be quite hard to buy your way into HN, no matter how cleverly you do it.

I’m not saying it’s impossible. But it’s so easy to believe, and so hard to do, that it warrants skepticism.




HN / Reddit / Lobsters, or whatever programmer link aggregator, may well be astroturfed. But after a while you learn to recognise the smell of astroturfing, and even otherwise, such posts get refuted by commenters. It's more likely that these forums have fanboys showing up with irrational arguments than actual astroturfing.

The greater harm of this content-marketing bullshit is that it pollutes Google search results. If you search Google for mainstream enough technical terms, all you get is shallow, poorly written posts on bullshit sites.

I'd like someone to make a search engine for authentic programming-related websites and blogs, even if hand-aggregated, instead of making me surf through 5 pages of highly SEO'd sites like ZDNet, geeksforgeeks, DZone, thenewstack, Quora, etc.


I've been pleasantly surprised by content on DZone at least, though less so by the others.


It's a hit and miss site tbh.

I was once told that one of my rambly blog posts had appeared on there without my knowing; it wasn't the best, but still. My post was on our 'company' blog; it turns out the marketing department of another segment of the company just took it and reposted it on DZone.


I think that anti-astroturf systems are definitely one of the things that should qualify to remain as proprietary data of the people who run the servers.

Much the same as the best online payment processing anti-fraud services are an opaque black box that you feed some data into, and you get a result back. They don't tell you what's going on inside the black box.

I would not be surprised at all if the top vendors for online payment processing fraud detection also offer services for anti-sockpuppet/anti-inauthentic user detection. Some of the methods going on in the back end to analyze the validity of a transaction will also apply.

Considering the modern weaponization of social media to manipulate stocks, elections, protests and such, I would consider that sort of SaaS to be a growth market.


There is a problem, though. In a physical setting you cannot do shadow banning. In the past, 100% of public discussion happened in physical public spaces; let's say that now 50% of it happens online, and that this share grows considerably, to something like 75%. The issue is that public discussion becomes increasingly easy to censor.


Censoring isn't bad in and of itself, so it would help if you illustrated this with some undesirable examples of things happening today.


HN hands out shadowbans like candy for people suspected of vote manipulation. I completely agree with this approach because it keeps the site clean.

An interesting question to ask is whether HN owes it to its users to be more transparent about responses like shadow banning and to provide ways to appeal them. Most of us would say no, the current approach is working for us and we should keep it going. But then I wonder why we're OK with HN behaving like this but not with large social media companies.


I wonder how shadowbanning can work at all to begin with. It only takes 10 seconds to open a public thread in an incognito window and confirm whether voting, commenting, etc. actually happened as the commenter intended, or only in a private echo chamber.


One way I fixed this in a small gamedev forum I help maintain was by letting users view shadowbanned comments created by the same IP. There's still the chance of the user using Tor/VPNs, but it's rarer.

Shadowbanning is by no means perfect, but it's still a good deterrent in my experience.
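To make the idea concrete, here is a minimal sketch of that same-IP visibility rule (all names are hypothetical — this is not the actual forum code, just one way the check could work):

```python
# Hypothetical sketch: decide whether a viewer should see a comment on a
# forum that shadowbans users but still shows shadowbanned comments to
# anyone browsing from the author's IP, so the ban is harder to detect
# even from a logged-out / incognito session.

def is_visible(comment, viewer_ip, viewer_is_mod=False):
    """comment is a dict with 'author_shadowbanned' and 'author_ip' keys."""
    if not comment["author_shadowbanned"]:
        return True   # normal comments are visible to everyone
    if viewer_is_mod:
        return True   # moderators see everything
    # Show the shadowbanned comment to anyone on the same IP as the author,
    # so the banned user checking anonymously still sees their own post.
    return viewer_ip == comment["author_ip"]

comment = {"author_shadowbanned": True, "author_ip": "203.0.113.7"}
print(is_visible(comment, "203.0.113.7"))   # True: same IP as the author
print(is_visible(comment, "198.51.100.2"))  # False: hidden from everyone else
```

As the thread notes, this breaks down behind large NATs (many users share one IP) and against anyone who checks from a second network, but it raises the cost of detection for the common case.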


IP-based recognition is annoying for people living in third-world countries[1], though: because there is little IPv4 space there, many of them sit behind the same NAT.

Also, with tethering it's really easy to circumvent, without needing a VPN.

[1]: IIRC, the whole of Laos only has a /32 subnet… yes, you read that right: a single IPv4 address for an entire country. And many countries only have a few /16s.


I was intrigued enough to look it up. According to [1], Laos has 54,784 addresses. The smallest is Saint Lucia, with a /24. North Korea and Dominica have a /22.

(Apologies if I'm getting that number wrong. I don't do much with subnetting.)

[1] https://en.wikipedia.org/wiki/List_of_countries_by_IPv4_addr...
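For anyone unsure of the subnet arithmetic above: a /n IPv4 prefix leaves 32 - n host bits, hence 2^(32 - n) addresses. A quick check with Python's standard ipaddress module (example network addresses chosen arbitrarily):

```python
import ipaddress

# A /n IPv4 prefix contains 2**(32 - n) addresses.
for prefix in (32, 24, 22, 16):
    net = ipaddress.ip_network(f"192.0.2.0/{prefix}", strict=False)
    print(f"/{prefix}: {net.num_addresses}")

# /32 -> 1       (the "single address for a country" case)
# /24 -> 256     (Saint Lucia)
# /22 -> 1024    (North Korea, Dominica)
# /16 -> 65536
```

So a /32 really is one address, and even a /22 is only about a thousand — tiny allocations by any country's standards.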


Thanks for the fact-checking! It looks like I didn't recall correctly (or maybe it used to be true and changed at some point, who knows?).


Please read my message again. I'm not restricting anything and there's nothing to circumvent; it's about letting logged-out/anonymous users view more stuff, to deter detection of the shadowban.


Maybe “circumvent” isn't the right word here (not a native English speaker); what I meant is that it's easy to bypass your countermeasure: post from my computer, then check from my phone whether my comment is visible. If it's not, I've been shadowbanned.

And regarding third-world countries: your idea doesn't prevent people there from accessing the website, but they will be accessing a site where the shadowbanning feature is pretty much disabled, which could lead to a proliferation of trolls or spam targeted at that specific country.


If this ever happens I can just change my approach. It has helped me a lot so far. I'd rather have an approach that currently works in >99% of the cases I need than chase some hypothetical 100% solution that is virtually impossible to achieve.


This doesn't make any sense. Most IPv4 addresses change every day, if not multiple times during the day. I guess you can only filter the dumbest of the dumb this way. And if someone has the wit to open an incognito window, it doesn't take a genius to notice that they can only see the same day's comments.


There's a lot more involved in my case than that, but suffice to say it worked well in the forum I maintain, so your intuition disagrees with my practical experience. And yes, my trolls/spammers are dumb.


There are two different classes of user. One class checks those things and knows that their post didn't go through. (That doesn't apply to voting, though—that's important.) A second class of user doesn't check and doesn't know. That second class tends to be more naive spammers or promoters, and shadowbanning works well in those cases.

It's useful with the more sophisticated class too, though. If they have to start fresh with new accounts, it slows them down and makes what they're doing more obvious to the community.


I guess it works well against non-malicious jerks. They come, they don't get the social reward they are expecting, and leave. And maybe later they will retry but with a better behavior.

And maybe the malicious/non-malicious ratio is low enough to make this method efficient.


How I initially joined HN: I made an account to point out an astroturfed post that I'd seen someone promoting elsewhere for upvote spam.

I was just a passive consumer before.


I don't know about astroturfing on HN, but I've seen enough patrolling here to be skeptical about any kind of “HN has strong defenses” claims.


What does 'patrolling' mean in this context?


I thought it was a common term, but maybe it's just Reddit slang: I use it to mean one community “attacking” or “boosting” a submission on a forum based on the community's values rather than the worth of the submission. It can have different levels of synchronization:

- no synchronization at all: conservatives (especially American ones) flagging socialist-sounding posts (often upvoted by Europeans when Americans are asleep), Gophers & C++ guys flagging Rust posts, etc.

- loosely synchronized: some content is getting popular on /r/rust, or /r/python, some people there will connect to HN to upvote it here.

- strongly synchronized: some influential Twitter handle posts a message about how “some shit went to the front page”, and zealot followers come and flag the submission. It also works with specific subreddits (/r/programmingcirclejerk for instance, even though it's more aimed at comments than submissions).

It happens a lot, often enough to be noticeable. Sometimes it sort of regulates itself (like in the left-right battle between Europe & US, or between the Rust Evangelist Strike Force & Rust haters), but not always.


Thank you, that's interesting! I think some of those things are going on. Other points I'm a little skeptical about—for example I don't believe that the left/right divide correlates as strongly with the US vs. Europe as you suggest. Many of the strongest leftist posts we see come from the U.S. and many of the strongest rightist posts come from Europe (to judge by IP geolocation).

Your 'no synchronization' case is tribalism. That's certainly happening here, as probably in every large-enough group. Yes, it's a significant problem. But it's not the astroturfing/manipulation problem being discussed in this thread. If your skepticism about "strong defenses" was meant to include this case, that's too general.

Your 'synchronized' cases would constitute abuse in HN's terms, and if you or anyone notice it happening in the future, we'd greatly appreciate being told about it at hn@ycombinator.com. Actually, if you can even point to cases where it happened in the past (e.g. "some content [was] getting popular on /r/rust, or /r/python, some people there [connected] to HN to upvote it here"), it would be interesting to look back and see whether we detected it and/or could do something differently.

The one thing I'd caution is that it's extremely easy to convince oneself that these things are happening when they're not. Nearly everyone with strong views about this phenomenon is massively deceiving themselves about it—if you're only guided by what feels like it must be happening, there's far too much opportunity to just project things into the situation. People do this all the time, and it's a big problem—as I've said elsewhere in this thread, it's actually a bigger problem than the abuse and manipulation being complained about. The solution is to guard against that by always looking for some extraneous indication (i.e. evidence)—for example a thread on Reddit saying "let's upvote this on HN"—and to be agnostic in the cases where one doesn't have that.


> that the left/right divide correlates as strongly with the US vs. Europe as you suggest.

Right, talking about “right and left” was a mistake, because the meanings of these words are pretty fuzzy and highly context-dependent. I'd give a more precise description, then:

Comments containing criticism of mainstream economics, references to Keynes, arguing that “all capitalism is crony capitalism” or “capitalism didn't defeat communism, the welfare state did”, being in favor of strong state intervention, etc. are going to be much more upvoted while Americans are asleep. And conversely for comments referencing Milton Friedman, praising the power of the market, economic growth as the main goal for social welfare, etc.

More than once I've seen my comments on the aforementioned themes upvoted multiple times, then grayed out several hours later, only to end up with a positive score the next day. I didn't notice the temporal correlation until someone brought it up in a thread, and many people shared the same experience.

If you want to have a look, the recent thread on the Nobel prize in Economics smells like a good candidate for investigation (though I didn't participate in that thread, so I have no evidence there).


I think HN is one of those rare sites where I really wish there were ads or some donation button so I could pay. Dang is doing an exceptional job of holding things together. And despite a slight downward trend in discussion quality, HN is still by far the best forum for tech and other "nerd" interests. And the community as a whole polices itself. The vibrant and large number of active users, compared to a subreddit, Dev.to, and other sites, makes astroturfing HN really, really, really hard.

Anyone who has submitted anything on HN knows: getting onto the front page isn't an easy task at all, and staying there is even harder.

And Cunningham's law doesn't always work on HN. Sometimes the community just decides to ignore it. Lol


It would be a good idea to present stronger evidence before saying HN is _not_ astroturfable. Yada yada. Just pretend I flipped the rest of what you said around too.


> which, sadly, is hard to discuss openly

Why is this?


Because discussing the moderation system is strongly discouraged and leads to downvotes.


No, that's not it; it's because anti-abuse is a cat-and-mouse game, so while you can discuss it (in an appropriate setting like a junky meta thread you didn't start), you can't expect HN to be forthcoming with details, because those details lower costs for abusers, and maximizing costs is the whole ballgame.

Kibitzing HN moderation itself is one of our oldest pastimes.


It's funny, because the most hardcore people in open source and security would argue that good techniques don't rely on obfuscation and secrets, because those cats can get out of the bag. I've never purrsonally subscribed to that, as I agree with the cat-and-mouse perspective. Information asymmetry is effective.


People in security who say that categorically are betraying ignorance, because there are several "hardcore" settings in software security where the same dynamic --- attacker/defender cost competition occurring by degrees --- plays out. Anti-ATO, content protection, botnets, anti-DDOS, hardware platform security, just to rattle a few off my head.

The correct security objection is to obfuscation being deployed in settings where there are decisively effective controls that could be deployed instead: where it doesn't make sense to raise attacker costs by degrees, because those costs can be raised to intractable levels instead. I'd cite an example, but it would spawn a 500 comment thread about how Linux sysadmins manage their networks.


I've never thought HN was impartial. The fact that discussing astroturfing is against the rules was highly suspect.

I consider this a highly censored website with particular objectives, but a decent userbase.


The rule is against insinuating astroturfing without evidence. The alternative is threads full of pointless insinuation about astroturfing—the favorite junk pastime of internet forums. HN doesn't have "particular objectives", it has one objective: to gratify intellectual curiosity, and that guideline is obviously integral to this.

That doesn't make actual astroturfing ok. We spend many hours combating it and banning accounts and sites that do it, including the ones that Troy's reporting on here. There's just a huge difference between it-really-happening and pointless-toxic-speculation. The difference is evidence, and that's what we require.

https://news.ycombinator.com/newsguidelines.html


Just curious, but what constitutes evidence in this context?

And how do you deal with the other side of "unfair" behaviour, e.g. excessive flagging or downvoting for legitimate posts or comments. As far as I'm aware there isn't any evidence required to downvote or flag.


By evidence I mean something in some data somewhere that's more than just the opinion being posted, which we can look at and evaluate objectively. I know that's a bit of a lame answer, but I can't give you specific examples without giving the same examples to others who would want to circumvent leaving evidence in that way.

The main thing to understand is that we need something to look at other than just an opinion that one commenter was expressing which another commenter didn't like. That's evidence only of difference-of-opinion, not abuse.

Such data isn't always secret and isn't always just on HN. For example, if someone is asking for HN upvotes on Twitter, we sometimes get links from eagle-eyed HNers. Similarly when someone is sending out spam emails trying to organize a voting ring. And sometimes spammers copy comments from other forums and paste them into HN. Those are pretty basic examples but I hope you can see that in each case there is some objective data that supports a judgment of abuse.

Conversely, suppose you like $BigCo and someone else hates $BigCo, sees your comment praising them, and replies "how much are they paying you, shill?" That's the kind of thing we don't allow, because there's literally nothing supporting that judgment. The same type of commenter will see various comments arguing for $BigCo in HN threads and then post to other threads with high confidence that "HN is overrun with astroturfing". What they mean is that it's overrun with comments they don't like—and even then, "overrun" is an exaggeration.


Thank you so much, dang. I'm with you 110% that just throwing accusations around like this is bad, and that it's most likely an "I couldn't disagree more" reaction leading people to wrong conclusions about shilling and such.

What about the other side of it, though? Your reply didn't really address it.

What I feel is happening now is that in those situations (and others), people downvote and flag things that they don't agree with. They're not shouting "shills / astroturfing" yet the collective power makes it easy to silence opposing opinions, especially if those opinions are in a minority.

Completely anecdotal, but I reported to you two cases of flagged stories that in my opinion had value for the community (and in the discussions around them). Those stories were effectively silenced. I think it's a shame. There's no evidence required to flag or downvote, and no requirement to even give an argument/reasoning for doing it.

Are there any plans to tackle this kind of behaviour in a similar way that empty/non-evidence-based claims of astroturfing and shilling is dealt with?


Here's some logs + a writeup of when they spammed Lobsters on behalf of LoadMill: https://lobste.rs/s/utbyws/mitigating_content_marketing

My first name @push.cx if you want to share notes on these or other abusive users.


'Fairness' has no relevance when it comes to a site's participants deciding among themselves what is and is not worth discussing. Perhaps those posts and comments that were flagged or downvoted were considered less legitimate by the rest of the userbase than you suspected. <insert xkcd 1357 here>


IDK. Guidelines: https://news.ycombinator.com/newsguidelines.html

Be kind.

Please don't sneer

Please don't post shallow dismissals

But downvotes are sometimes used unkindly, dismissively, and as a way to suppress a different view (which may or may not be justified). You nuke me and say why, I'm happy - we can talk! I can learn something new! Downvoting factual posts silently is... frustrating. And ill-mannered.


In aggregate downvoted posts are practically always low-effort (or sometimes just really unhinged or otherwise patently wrong), so I'm not convinced that downvotes being used unkindly is a big problem.


Downvoting is also restricted to accounts who exceed a point threshold. I think that particular feature (ensuring people who can downvote at least have some level of trust within the site) has been critical to prevent hive mind-style downvoting. I rarely see downvoted comments where I don't understand why they were downvoted.


perhaps ironic, but can you help me understand why the GP comment was downvoted on this thread? I’m genuinely wondering. (the one from throwaway_pdp09)

I’m definitely happy that there’s minimum Karma for downvotes, but how does it prevent hive-mind downvoting?


At a guess, it's a post that implies a sort of soft conspiracy with very little evidence. It just doesn't contribute a whole lot of value to the subject at hand, except for attempting to foment a vague sense of wrongness.


I'm the poster of that. Regarding evidence, I copied bits from the HN guidelines, said that downvotes are OK if I know why cos they bring benefit, then got silently downvoted. Is that not evidence enough? BTW I can't downvote myself. It was an honestly made critique and suffered from exactly what I protested against.

If I was wrong, your response does not elucidate why, in fact let me quote bits back to you "soft conspiracy" ... "very little evidence"[0] ... "a vague sense of wrongness"

Well maybe but your post has less substance than mine.

[0] you didn't ask for any BTW


That is not a good indicator. Your mind can always rationalize something.

Imagine if a large corporation bought ~50 old accounts and spoofed different computers/browsers. Only 5 votes are needed to hide a post.

Controlling the narrative is almost trivial if you can spend mere thousands of dollars.


But as I've pointed out before, how do I as a user of this site, get access to the very evidence I need?

There are numerous suspicious posts - which may just be my biases, or not - such as this thread with a guy posting a lot of facts https://news.ycombinator.com/item?id=24746397

I applaud this because we need facts, but one guy there has an astonishing level of facts ready to go and a rather slick way of putting things which I recognise. Why? Because I used to work in publicity (though not of the spinning kind). I recognise the style. I want the guy here and posting, because we need facts, not shouting, but if he has a financial interest, we need to know. It should not stop him from being here if he does, because in some respects his pro-nuclear posts are pretty good, but it needs to be in the open.

Other problems - there's a certain style of posting that proposes stuff with zero facts and magically gets voted to the top of the thread. No facts, slight whiff of FUD, pushed to the top. That's not actually how the HN crowd tends to react to info-free posts (or maybe there's a subset who does, I may be mistaken). But how do I analyse the voting patterns when I don't have the voting data?

I'll not mention what happens when China becomes the subject.

Is it me? I don't know. But then I can't tell without evidence. There seem to be other problems. Is it me? I dunno. I'm posting less here because I feel good stuff is getting swamped (not just my stuff, a lot of other people's stuff. My posts aren't generally a pinnacle).

Edit: so how do I get the evidence you require?


New submissions appear here: https://news.ycombinator.com/newest. Eventually, consider enabling "showdead" on the settings/profile page.


It has already been pointed out that discussing anti-astroturfing measures in detail is not done, for reasons that have been explained. It doesn't seem reasonable to keep demanding explanations given what has been said before.


"The difference is evidence, and that's what we require."

So how do I supply the required?


You can always contact the moderators in private if you're unsure, or do lots of research, like Troy did here.


> It would be a good idea to present stronger evidence before saying HN is astroturfable.

What? You do realize anyone can create HN accounts to post any link and comment on any discussion, right?

Even if you argue that there are magical ex post facto measures to tackle obvious and rampant abuse, you do understand that the system is indeed vulnerable to astroturfers right in its very design, don't you?


Anyone can create HN accounts to post any link and comment on any discussion, but certain accounts are dead on arrival, and votes from certain accounts don't count. So you can pollute /new or discussions (especially for users with showdead on), but that doesn't mean your content automatically gets prime placement just by registering more accounts.

Meanwhile, clickbaiting is much more effective than creating accounts.


> but that doesn’t mean your content automatically get prime spot placement

That's not how it works. It might be a desirable goal, but that doesn't mean the role of a content marketer is not to a) astroturf discussions, and b) generate content that's SEO-friendly even if it doesn't blow up.

Customers already get their money's worth if you get your minions sparking casual, low-key discussions about their product/service/PR talking point in random places, in order to raise awareness and frame topics in ways that serve their interests.



