The Never-Ending War on Fake Reviews (newyorker.com)
121 points by raleighm on May 31, 2018 | 100 comments



Disclaimer: Not speaking for my employer, and my experience was several years ago and may be out of date.

My first project at Google was working on fake reviews on Maps. ML was certainly useful for some things, but at the end of the day sometimes you literally have no practical way of knowing if a review is fake or not. Was this written by a customer or by the business owner's brother in law? Or by his competitor down the street? Who knows? Which means it's pretty hard to get good, complete training data to train your model on.

Of course, there are classes of fake reviews that are easy enough to detect. But as the fake review-writing AIs get better, I don't know how the anti-fake review AIs can win.


> at the end of the day sometimes you literally have no practical way of knowing if a review is fake or not

Can't you check the account itself, to see whether its behavior over time shows signs that it's an actual human user instead of a bot?

This can't filter the owner's brother in law, but if bot accounts can be filtered then they cannot leave lots of reviews either, because the owner does not have hundreds of relatives.


Reminds me of the player automation used to cheat in online video games. You'd write code to automate player movement and activity in order to progress in the game 24/7, rather than the couple of hours a day an average player could manage.

The strategy at many studios was to use a barrage of imperfect solutions which raised the effective per-unit cost of botting to something most players weren't willing to accept, then hunt the remaining "big-time" offenders using a separate set of tactics, like identity "fingerprinting", correlating billing information, etc.


(I probably worked with one of GP's predecessors on his job.)

Account-level signals are nice but insufficient. There's a very long tail of low-activity accounts that are difficult to distinguish from a bot. You can't really classify them as bots without generating a ton of false positives, and in general you'd rather let a possibly fake review through than piss off a genuine first-time reviewer.
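To make that trade-off concrete, here's a rough sketch (purely illustrative, nothing to do with any real production system): pick the bot-score cutoff so that only a tiny fraction of known-genuine accounts would ever be flagged, and accept whatever bot recall that leaves you with.

    import numpy as np

    def pick_threshold(scores_genuine, scores_bot, max_fpr=0.001):
        # Choose a bot-score cutoff that flags at most `max_fpr` of known-genuine
        # accounts, accepting whatever bot recall that leaves. Scores are hypothetical.
        threshold = np.quantile(scores_genuine, 1.0 - max_fpr)
        recall = float(np.mean(np.asarray(scores_bot) >= threshold))
        return threshold, recall

    # Synthetic example: genuine low-activity accounts overlap heavily with bots.
    rng = np.random.default_rng(0)
    genuine = rng.beta(2, 8, size=100_000)
    bots = rng.beta(6, 3, size=5_000)
    t, recall = pick_threshold(genuine, bots)
    print(f"cutoff={t:.3f}, bot recall at 0.1% FPR={recall:.1%}")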


Isn't that what shadowbanning is for? If the initial conclusion of the evaluation is that the user is a bot, then you can make the submitting user think the post was successful, but hide it from everyone else until the evaluation concludes that the user is real.


Allowing reviews only from active accounts could at least alleviate the problem, and in the case of Google, it can even check the account's activity via its various services.

A Google account cannot really be low-activity if its owner browses the web, watches YouTube, etc. The tiny minority who log out of Google when browsing could not leave a review, but that may be a small price to pay for more valid reviews.


I imagine Google has a big advantage here, at least for brick-and-mortar places, since they can use location data to determine whether the account connected to a device has physically been in a place. It would also explain why they always ask me to review/rate businesses 5 minutes after I walk in.


All the real-time location based stuff is after my time, but it definitely looks like an attempt to generate more known-good reviews. Possibly for better training data for modeling. I'm not sure that the historical location data is as useful as you might think - there's often a large temporal gap between the experience and the review writing.

Google does have a big advantage in verifying good accounts through other activity, but that advantage shrinks dramatically for new accounts. There are still some things you can do, but at the end of the day you're only willing to assign so much risk to a brand-new account based on metadata.


Then the problem becomes: are you stifling reviews from someone who just created an account and has no behavior to be spoken of?


Since the problem of fake reviews is pretty serious I think it would be an acceptable compromise if fresh accounts could not write reviews.

The account would need a bit of history, e.g. one or two weeks, before allowing it to write reviews.


That would also risk turning off new users who create an account explicitly to leave a review.


Then allow them to write reviews like normal, but the review would actually be visible only to them during their trial period.

The user would see the review, others wouldn't, and after one or two weeks, when the user's activity confirms they're not a bot, make the review visible to everyone.
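A minimal sketch of that holding-pen behavior, assuming a simple account store (all names here are hypothetical): the review is saved immediately, the author always sees it, and everyone else sees it only once the account has aged past a probation window or been marked trusted.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    PROBATION = timedelta(days=14)  # the "one or two weeks" suggested above

    @dataclass
    class Review:
        author_id: str
        text: str
        created_at: datetime

    def is_visible(review: Review, viewer_id: str,
                   author_account_created: datetime,
                   author_trusted: bool, now: datetime) -> bool:
        # The author always sees their own review; others see it only once
        # the author's account has cleared probation or been marked trusted.
        if viewer_id == review.author_id:
            return True
        if author_trusted:
            return True
        return now - author_account_created >= PROBATION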


Have you thought about what that looks like from the user's perspective? You'd just as likely turn them away for life with such an unpleasant experience.


The user would see his own review just like today. The user wouldn't notice anything unusual. What would be the unpleasant experience?

Surely, you don't think that average users check their reviews from private browsing tabs to make sure it's there and stuff.


They intended to post a review. Not posting it but saying you did is straight up lying. Shadowbanning is morally dubious as a rarely used tool. It's pure evil as a default.

Do you not care about lying to them just because they are "average"?


Their review is only hidden for a while to make sure it's real. The current situation is dismal, it has to be fixed somehow. There is no perfect solution, because the completely open approach is abused by spammers.

It may not be the best solution, every solution has drawbacks, but do you have an idea for a better solution to filter out fake reviews? If so, please present it.


Not having a better solution than lying to your customers is not a reason to lie to your customers. It's just laziness bordering on sociopathy.


What if the user only has one review to write, warning others of their terrible experience? What if they don't necessarily write reviews in the future? I know I only write bad reviews, because I was annoyed enough at the situation to document it. You always hear negativity first on the internet, then praise.


Well, if a human can't tell, at some point, the AI can't tell either.

Since humans are good at creating lies, and we all get fooled at some point in our lives, the problem hasn't been solved for millennia, and it's not an AI problem.

Sure, you can throw way more data at the AI than we used to have, but even totalitarian states didn't manage to silence the opposition, so I doubt a business bound by law can do anything about it.

Google's protection measures are already becoming super invasive and, instead of helping me, have locked me out of my account several times because they decided I was a fraud.

There is a limit on what you can do properly here.


I'm starting to think that the solution is not to reduce fake reviews, but to increase verified accounts who give reviews.

I think that by increasing the number of reviews from verified accounts (accounts verified to be connected with a human of name X), the rest of the anonymous or pseudonymous reviews may be looked at with more scrutiny.

Right now, let's assume that the default for social networks and other places of interaction on the internet is something like 10% verified accounts and 90% unverified. We know that within that 90% unverified, there are many real humans with their real names. So we spend a lot of effort to parse that 90% to find the real humans, the ones pretending to be others, the bots, the anonymous ones, and others. Companies are using ML, individuals like me just guess and try to only accept people that I've met in person and avoid platforms where bots seem to hang. However, with sites like Amazon, I have to hope that Amazon has filtered out the reviews from the fake accounts. Yes, there will be fake reviews from verified accounts, and that seems like another issue, and maybe less prevalent, I'm not sure.

If the percentage of verified accounts flips, from 10% verified / 90% unverified to 90% verified / 10% unverified, I think many of these fake reviews filter themselves out. We would trust that this specific person is saying this, and still leave room for pseudonyms and anonymity, but be more clear that one is using such measures to hide their identity.

How do you think this would impact fake reviews on Amazon? What secondary impacts do you see more verified accounts on Amazon and on other platforms having on discourse? I hope this post wasn't too long, again, I'm new to HN and hope I'm staying within bounds.


Even for verified purchases, Amazon's over-eager review beggary is just bonkers.

You buy something, and regardless of what it is, you get automated requests to review the purchase. Sometimes before the item has arrived. Practically always well before you have managed to establish whether the purchase was any good or not.

This is why I ignore pretty much every single please-review-me prod from Amazon and their retailers. Send me a request to review before I could have had proper time to evaluate the purchase in practice, and you get bucketed with other entitled f--kwits. You are also likely to lose my future business.

I believe Amazon could improve the S/N ratio of their reviews if they actually considered how long it takes to test out any of their purchases.


There are Facebook groups where Amazon sellers will post an item. Then you purchase it and write a review, and the seller will send PayPal money or an Amazon gift card for the price of the item and maybe a few more bucks.

Hiring a few interns to sign up for all of these Facebook groups and track who is participating would go a long way. Doesn't seem like Amazon really cares much.


A few years ago, during a business trip, I ate at a relatively upscale restaurant with work colleagues. The waitress offered our table a discount if we'd all leave positive reviews online. It seemed very dishonest to me.


I don't think it addresses the core issue, and it creates additional ones. There are people who have valuable opinions who won't post them if they can be connected back to them. There are people who won't bother getting verified who would otherwise leave honest reviews.

The bad players will find a way to get verified and now you're worse off than before (fewer legit reviews plus bad ones with a fraudulent stamp of approval).

Whatever the solution, you have to take into account that bad players have a higher incentive to pass your test / jump through hoops than legitimate people do.


I hear you on that. I think it may work better for other types of interactions, but not this one on Amazon reviews. As you've said, there's probably a stronger incentive for people who want to game the system to jump through the hoops compared to ones who don't.


I think this would help to solve some of the quality and fake-goods issues on Amazon. What it can't solve is a dishonorable seller switching out a good product for an inferior one. We should have 'verified sellers' as well, tied to a real name.


Amazon and some of the other firms have enough data to know a subset of their users are real people without any additional effort. I have years of order history, with changing credit cards and addresses tied to my name, plus books and movie viewing over a long period. It'd never be worth the effort to fake that.


I actually wonder why crowd-sourced reviews have become so ubiquitous, to the point of almost being necessary (along with five-star ratings). Has anyone legitimately attempted to challenge the status quo here?

I'm reminded of videogamedunkey on YouTube and his video about game critics[0], where he goes into some detail about the preferences and integrity of the critic being just as important (if not more) than the review and rating.

Professional critics build a strong reputation around their taste and preferences, and they gain renown from doggedly sticking to those principles. So you'd likely trust the opinion of Roger Ebert or Mark Kermode if you share their taste in cinema, and even watch something you normally wouldn't if they recommended it. You're very likely to subscribe to a publication whose critics align with your own tastes because they're effectively curating content for you in the form of recommendations.

None of that applies when you have reviews from a succession of total strangers - you're not going to research dozens or hundreds of commentators to establish a logical consistency in their point of view and decide whether or not their tastes align with yours. More often than not the reviews are low quality and low value, spread across a five-point scale but essentially treated as a binary like/didn't like system.

At that point, crowd-sourced reviews tell you nothing you didn't already know: some people enjoyed it for one reason and others didn't for a different reason. How do you know whether those reasons are legitimate or authentic, even when they're not fake? How do you know they weren't gamed or incentivised somehow to inflate expectations? Which ones can you trust?

I suppose they boost sales but just like advertising, that doesn't mean they're automatically beneficial to the consumer. It's just another attempt at manipulation.

[0] https://www.youtube.com/watch?v=lG2dXobAXLI


I think one of the biggest challenges is scale/breadth. For some things, like movies, we have lots of critics and there is demand for them. I work in web hosting reviews. Everyone technical has an opinion, and the non-technical do too. The average HN reader and a non-technical person starting their first blog are very different use cases with very different demands. I see the 'why don't you just get a VPS and do it yourself, it's cheaper' advice given to basically anyone way too often. Those are the kinds of people quite actively trying to 'help' newbies.

And the reviews, well, it's not a product category where most people try a lot of different companies or have any sense of how theirs might stack up. I've got to be one of the most experienced people in terms of number of companies used (I probably had accounts at ~50 companies this year), and I still wouldn't be comfortable writing reviews. It's too subjective and biased.


Metacritic does this for games, music, tv and film.

Aggregating what they regard as reliable reviewers is pretty reasonable.

For general products it's much harder. There is less of a market. But perhaps it'll come. Aggregate Ars Technica, Tom's Hardware and some other hardware review sites maybe?


I'm starting to despair of the possibility of ever having a reliable way to gauge product quality on the Internet. It seems to be a huge asymmetry problem where, as soon as a trustworthy system appears, the rewards of gaming it are so big compared to what consumers are willing to pay to have it stay trustworthy, that sellers will throw unlimited money at the problem until they have captured it with useless garbage.

About the only thing you can do is find a particular writer or site that has a reputation you trust, but they might never review the specific product you're interested in. And when you're trying to buy something in an entirely new category you're not familiar with, it's hopeless.


This is why it's important to have a reliable return process for online shops. Unfortunately, no online store that I know of has one, least of all Amazon, so I find myself buying more and more in regular stores and generally buying less and less often simply to reduce the risk of getting stuck with a bad purchase. It's ironic that the online shopping experience now has been topped by the brick and mortar store experience without brick and mortar doing anything significant to alter their experience in decades. Online stores are simply untrustworthy and I don't see this changing soon. From Amazon actively closing accounts they don't like to them and others failing to ship products repeatedly, the experience is now worse in every single way. And I'm not a big fan of driving to stores at all.


> This is why it's important to have a reliable return process for online shops. Unfortunately, no online store that I know of has one, least of all Amazon,

Huh? I have bought hundreds of items the past year for my small business and returned probably a dozen. I've never had any problem. I can even drop off the item at my local Kohl's and they package the return and ship it at no cost, even if it's my fault for the return.

Edit: I'm talking about Amazon.


Woozle's paradox of epistemic systems:

Because of a high percentage of the population being present, there is now substantial power to be had by influencing the discussions that take place.

https://plus.google.com/104092656004159577193/posts/RCyGi3HQ...

https://old.reddit.com/r/dredmorbius/comments/5wg0hp/when_ep...


Goodhart's law: A measure ceases to be useful as soon as it becomes an optimization target.

Concentrated benefit and diffuse cost is how a minority can maintain a globally sub-optimal status quo.


All we need is a web of trust that is easy to use and so many of these problems go away.


Then you run into the same issue gpg has: it is hard to get up and running.


As in OpenPGP? Ease of use has to extend beyond software. How much do you trust your friends? Your friend's friends? People in the same city/occupation/age/social class/gender/race/citizenship? PGP has one threshold for transitive trust: three trusted keys trusting another key is good enough for your key to also trust it.
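For what it's worth, that validity rule can be sketched in a few lines. This is a simplification; the thresholds below mirror the well-known GnuPG defaults of one fully trusted or three marginally trusted signers, and the code itself is only an illustration, not any real implementation.

    def key_is_valid(signer_ids, owner_trust, full_needed=1, marginal_needed=3):
        # Simplified web-of-trust check: a key becomes valid if it is signed
        # by enough keys whose owners you trust to certify others.
        full = sum(1 for s in signer_ids if owner_trust.get(s) == "full")
        marginal = sum(1 for s in signer_ids if owner_trust.get(s) == "marginal")
        return full >= full_needed or marginal >= marginal_needed

    # Three marginally trusted friends vouching for a reviewer's key:
    trust = {"alice": "marginal", "bob": "marginal", "carol": "marginal"}
    print(key_is_valid({"alice", "bob", "carol"}, trust))  # True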

In our capitalist society the easiest way for something to spread is a profit motive. Whoever spends the marketing resources to popularize it will have to recover their costs somehow. A lot of people will not pay for yet another subscription, and marketers will pay a lot to make their company look better.


Fake reviews can be seen as an instance of Goodhart's law, where the metric is the rating or score of the business. Initially those ratings may have high correlation with something real, let's say the "quality" of the business. But the more people rely on those scores and the reviews underlying them, the more incentive businesses have to game the system— which destroys the original correlation between ratings and quality.

A big part of the problem with review systems is the one-to-many nature of nearly all of them: when a person posts a review, that review and its score can be seen by everyone. This leverage makes it very efficient for businesses to game the system, as a small amount of fake information can "infect" the purchasing decisions of a large number of users.

So, one alternative might be a many-to-many review system where you only see reviews and ratings from your network of friends/follows (and maybe friends-of-friends, to increase coverage). So essentially Twitter, but with tools and UI that focus on reviews and ratings. That way, fake reviews could only affect a limited number of people, making the cost/benefit calculus much less attractive for would-be astroturfers and shills.
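A toy sketch of that many-to-many filtering, assuming a simple follow graph (all data structures here are hypothetical): only surface reviews written within one or two hops of the viewer, so a shill review reaches only the accounts that chose to follow the shill.

    from collections import defaultdict

    follows = defaultdict(set)    # viewer -> set of accounts they follow
    reviews = defaultdict(list)   # business_id -> list of (author_id, stars, text)

    def visible_reviews(viewer, business_id, max_hops=2):
        # Collect accounts reachable within max_hops follow edges.
        reachable = set(follows[viewer])                 # friends (1 hop)
        if max_hops >= 2:
            for friend in list(reachable):
                reachable |= follows[friend]             # friends-of-friends (2 hops)
        reachable.discard(viewer)
        # Only show reviews authored by someone in the viewer's network.
        return [r for r in reviews[business_id] if r[0] in reachable]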


Doesn't this already violate a bunch of laws? (undisclosed advertising, wire fraud or something like it, etc) Just enforce them. Can't do much about overseas people but you can penalize the businesses that purchase fraudulent marketing. E.g. if doctors started getting medical licenses permanently revoked for buying fake "testimonials" I imagine it would end fairly quickly.


https://douglas-fraser.com/FakeReviews/index.html

I did my dissertation on this topic - text only analysis. I used a dataset that was commonly used in the beginning, but there are some issues with it. I plan to extend this to real reviews, as in 80 million Amazon ones (when I get the time).

Text-based features are useful, but non-text-based ones are even more so. Even spamming groups can be detected; at least there has been research into that. Combining all the techniques in an ensemble would be productive - but is it really in Amazon's interest? My sense is that whatever they do, they pick the low-hanging fruit; trying to process every review that comes in would perhaps require a lot of compute. But stuff like floods of fairly similar reviews for new products should be easy to detect. Perhaps they are relying on Fakespot and ReviewMeta to do the heavy lifting.
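As a rough illustration of the "combine text and non-text features" point (the feature names and tiny data frame below are made up, not the datasets referred to above), one could glue TF-IDF text features and account metadata into a single classifier:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import Pipeline

    # Hypothetical training frame: review text plus non-text signals.
    df = pd.DataFrame({
        "text": ["Great product, fast shipping!!!", "Broke after two days.",
                 "Best purchase ever, five stars!!!", "Works fine, nothing special."],
        "account_age_days": [1, 900, 3, 400],
        "reviews_last_24h": [14, 0, 9, 1],
        "verified_purchase": [0, 1, 0, 1],
        "label": [1, 0, 1, 0],            # 1 = fake, 0 = genuine
    })

    features = ColumnTransformer([
        ("text", TfidfVectorizer(ngram_range=(1, 2), min_df=1), "text"),
        ("meta", "passthrough",
         ["account_age_days", "reviews_last_24h", "verified_purchase"]),
    ])

    model = Pipeline([("features", features),
                      ("clf", GradientBoostingClassifier())])
    model.fit(df.drop(columns="label"), df["label"])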


Curious if all the fake reviews will end up creating a market for certified reviewers, or something like Consumer Reports. Sure, we can't trust CNET or Wired or PCMag, but maybe if the incentive is high enough there will be a market for honest reviews, including an incentive for auditing and verification?

You could start a reviewer guild and use cryptographic signatures to verify that a reviewer's guild membership is up to date and still valid, hoping the guild has the incentive to stay honest, etc.


> creating a market for certified reviewers or something like consumer reports.

I'd argue that is exactly what The Wirecutter is. https://thewirecutter.com/ https://en.wikipedia.org/wiki/Wirecutter_(website)

They solve the reviewer problem by having in house staff write the recommendations (reviews). Those staff then find and bring in experts. Virtually every page has a "Why you should trust us" section listing experience of those who contributed to the article. It does mean you get credible opinions, but also that you don't get the "wisdom of the masses" such as at Amazon.


They do sometimes draw on the "wisdom of the masses" as well, occasionally citing the overall ratings from various sources within the detailed portions of their reviews.

I'm generally a fan of the Wirecutter while remaining somewhat skeptical of the motives behind their reviews. My skepticism hasn't changed since NYTimes took over, but I do still visit the site when looking for a specific product. Nonetheless, they've made a good deal of affiliate $$ from me, and I haven't been severely disappointed yet.


Once they've done a page there is also often a vigorous comments section, and they do roll that feedback into future updates. My opinion of the site's quality is much the same as yours. But at least they show their working and do update content regularly.


There was always a problem with misaligned incentives. If you're not paying for the review, they have no loyalty to the reader.

E-commerce websites pay lip service to deleting fake reviews, but higher product review scores result in more sales. Even if this doesn't result in intentionally ignoring fake reviews in pursuit of short-term sales growth, note how Amazon.com, like all other five-star rating systems, suffers from massive score inflation. Most products have a 4.5/5.0 rating; 4.0 indicates some potential problems (or less sophisticated customers), and 3.5 means it's sub-par.

A lot of online stores simply have no customer review section because they have rationally determined that, for them, the decreased conversions due to bad reviews plus the moderation costs exceed the increase in conversions due to good reviews.


> five star rating systems suffers from massive score inflation

Indeed.

I liked how Goodreads had their original rating scale done: 3 was good, 4 was very good, and 5 was truly exceptional. You weren't supposed to give 5 to more than maybe 2% of the books you'd read.

Then they got bought out (by Amazon) and the scale soon enough got devalued.

I still think that one potential solution to the scale inflation would be to consider the grade distribution a person uses. If all they give out is ones or fives, their reviews should have near-zero weight. If they give out a more balanced (roughly Gaussian) distribution of ratings, then their rare extremes should be weighted much higher.
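One way to sketch that weighting (purely illustrative, not anything Goodreads or Amazon actually does): score each reviewer by the spread of their historical ratings, and down-weight accounts that only ever hand out identical scores.

    import numpy as np

    def reviewer_weight(history, min_reviews=5):
        # Weight a reviewer by how spread out their past ratings are (1-5 scale).
        # All-identical histories (nothing but 5s) get weight near zero;
        # balanced histories get weight near one. Purely illustrative.
        history = np.asarray(history, dtype=float)
        if len(history) < min_reviews:
            return 0.5                            # too little history: neutral weight
        return float(min(1.0, np.std(history)))   # ~1 star of spread saturates

    def weighted_score(reviews):
        # reviews: list of (rating, reviewer_history) pairs.
        weights = [reviewer_weight(hist) for _, hist in reviews]
        ratings = [rating for rating, _ in reviews]
        return float(np.average(ratings, weights=weights)) if sum(weights) else None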


Honestly, just make it thumbs up or thumbs down. Five-star ratings are quite pointless.


I like Rotten Tomatoes's system. Individual critics give a thumbs up or down. A movie is "certified fresh" if at least 80% think it was worth watching. It doesn't mean the average rating was 80% or that that number means it was amazing or excellent, it just means the vast majority thought it was worth your time and money.
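In code, that aggregation is just a thumbs-up ratio with a cutoff; the 80% figure is taken from the comment above and the minimum-review count is an illustrative assumption, since Rotten Tomatoes' actual criteria are more involved.

    def certified_fresh(thumbs, cutoff=0.80, min_reviews=40):
        # thumbs: list of booleans, True = critic thought it was worth watching.
        # A title qualifies only with enough reviews and a high enough ratio.
        if len(thumbs) < min_reviews:
            return False
        return sum(thumbs) / len(thumbs) >= cutoff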


Your idea is along the lines of 'rater reliability' in the field of educational measurement/psychometrics.


I recently used a site called “Tabelog” while traveling in Japan - the median review is roughly a 3.2, a pleasant surprise that left me with the clear sense that a restaurant rated 4 or above was quite a quality establishment.


The guild could issue ReviewCoin and use this to pay reviewers, thus incentivizing them to work in good faith lest the public lose faith in the project, tanking the value of the coin.

ReviewCoin could be redeemed at participating online retailers for goods, thus allowing reviewers to buy more products to review.

This is a silly idea but it makes me smile.


Of course, if the reviews you saw were only from people whose prior reviews you agreed with, or who posted reviews which matched up with the reviews you yourself posted, then the system just wouldn't be game-able. But that would involve getting users actively involved in writing reviews as well as actively agreeing with the views of others. But when it comes down to it, until you have a review system that will tell me that the Super Bowl is garbage and not worth watching because it knows my tastes and doesn't give a shit about what other people who are nothing like me think, your review system is a piece of garbage. Of course it's gamed. It's trash, and it's not going to somehow be magically not-trash even if people don't actively seek to game it.


I think the bigger problem is reviews written by stupid people. I saw a mosquito-killer lamp that had very low reviews; when you filtered to one-star reviews only, the majority of reviewers were complaining that the blue-light lamp doesn't kill roaches and only attracts mosquitos, so it must be fake.


I prefer the term insufficiently savvy customers. Products that are not universally compatible or require specific actions to be taken tend to review more poorly. Cheap items tend to review better as customers are more forgiving.

The customer ultimately determines their satisfaction level, even if they are complaining that a software product clearly described as Windows only is incompatible with their Mac. Or their "wireless" machine doesn't work without the power cord inserted.


I use fakespot.com regularly for Amazon purchases.


I think video reviews will become more and more important. Sure, people can lie on camera, but I think it’s harder to fake convincingly. And AIs/spammers can’t present on camera with the same ease that they throw up blog posts.


This reminds me of an article I read many years ago about Facebook's and Google's approaches to organizing information. The article argued that Facebook seeks to organize data in the context of human relationships. I wonder if the future of online reviewing will be one where people only trust reviews left by people they know. Facebook supports leaving reviews on company pages (AFAIK) but overall it seems like they haven't put much effort behind making a robust reviewing system. I can easily see them coming to dominate this market though.


This is the logic behind calling someone to verify who they are. Harder to fake with automated means, easy for people to verify if they know the person.

The potential downside is that people can sometimes become overly trusting of truer-to-life, higher-bandwidth mediums, e.g. the fake IRS call, or the grandson who needs money for bail and baby (cue crying noise).


> Sure, people can lie on camera, but I think it’s harder to fake convincingly

Well, at least until this is perfected (and it will be):

https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-new...


This area of the internet seems poised for a massive disruption as AI starts coming in. In fact I'd be a little surprised if it wasn't already a somewhat influential thing.

You could imagine a GAN-like ecosystem appearing, where you have loads of algorithms both writing and identifying fake reviews.

And with that, poor old humans like myself can surely not compete. You'd always have to have a healthy scepticism towards reviews. I already do, in an era where most of the fakes are probably (?) still human-authored.


If I were a scoundrel, I'd start a company that uses AI to generate reviews, and a company that uses AI to identify fake reviews, and hope no one noticed. But I'm not.


The only natural evolution of such an 'artificial reviews' arms race is actually an interesting one. It's a bot that is able to predict the review that would most accurately reflect your own tastes. It would be one that would only promote things to you that you would actually love.


The AI will make it worse. AI will be able to write reviews indistinguishable from human ones by other AIs.


But we will create AIs to read the reviews on your phone and make a recommendation based on the probability that another AI wrote the reviews. Eventually a robot will be dispatched to the place under review, and will provide a ground truth. But of course the place under review will detect robotic reviewers and dispatch robotic service personnel to provide fake service to the robotic reviewers.


Who do you think is going to win in this struggle, the corporations that have a profit incentive to obscure the truth, or the individual consumers? Even if you try to market a “fake review defense AI” how are your consumers going to evaluate its effectiveness or ensure that you aren’t making deals with the AI reviewer companies behind their backs? It’s a completely lopsided fight.


I agree. A proper solution should probably work with the notion of a trust network, i.e. an extension of a social graph.


And hopefully not one owned by Facebook. Blogs with FB login commenting still have at least one "earn $$$$ working from home" comment and sometimes a flood of them.


I imagine that you can say that you don't trust Facebook, and then those reviews will be automatically excluded. Or perhaps that will automatically happen because all your friends don't trust Facebook.


Then you need to test who in your network has been compromised by fake reviews.

It's turtles all the way down.


https://reviewmeta.com/ is very useful to detect fake reviews on Amazon.


There is also https://www.fakespot.com/ . I wonder if fake review generators are already working around these fake review detectors, or if the detectors are not utilized enough for it to be worth the effort. Either way, some sort of arms race will ensue, leading to a Turing-complete trans-human post-life review AI extolling the virtues of some generic USB hub.


Bot detection can be trumped by paying people per review. Even pegging user accounts to a cellphone number has been circumvented by sock puppet farms buying SIM cards by the bag.


Has anybody considered that fake reviews might be part of the system, not part of the problem? Of course it's not in our interests as customers, but then again, I have never paid anybody for a fair review, while I bet a lot of wealthy review targets are willing to pay a lot for a clean public appearance. Would a review collector or shopping site really be willing to not help with that?


I'm starting a new project for verified reviews: reviews that are "peer reviewed" by others in the community. I am still in stealth mode but I'm working to solve this: https://thepeerreview.com


If it's good and covers products of interest to me, I'd consider paying a moderate subscription fee for such a service. But only if that meant the site didn't have advertising, tracking or affiliate links.

The power of affiliate links to warp reviews is underestimated even on Wirecutter, imo. If they have to choose between a product that has no affiliate links and one that does, it's pretty much impossible for that not to eventually affect the recommendations.


I like this idea, but how do you suggest dealing with fake peers reviewing the fake reviews?


How do you deal with collusion rings?


ask dang!


One solution I just thought of (a bit scorched earth maybe) would be to stop allowing written reviews. Only allow video reviews and enforce showing a) the reviewers face b) the product and c) the proof of purchase (could be blurred afterwards for privacy). An automatically created transcript would be shown with the video.

This would make it more difficult/obvious if one person were to submit many reviews (use face recognition), raise the barrier for fake reviews, and give a lot more ‘signal’ to people trying to determine if a review is fake.

Of course, with Deep Fakes and such this could be bypassed still, but it could still have an impact.


The problem there is you pretty much get no reviews at that point. Almost nobody is going to go through all that trouble to leave a review of a product if it means having to film yourself doing so. Meanwhile, paid reviewers will happily do so.


This is still easy to get around.

1) Give a person a gift card to purchase product

2) They purchase product and review (following this procedure)

3) Pay them

4) Repeat


It's pretty common for clothing buyers on Taobao to post a photo of themselves wearing what they bought as a review. I'm not sure how the site or the sellers incentivise this, but it surely adds some trust.


One of my dream projects would be to create a movie review app where users have to upload a photo of their ticket stub in order to be able to review a film. Maybe also require a GPS tag.


Plot holes galore in your idea. Ticket stubs are different everywhere, impossible to verify, easily faked. Movies can be seen online or rented. I rate your idea zero stars.


> I rate your idea zero stars.

You might want to but you are forced to provide a one star rating instead. :)


Detecting fake news, reviews, astroturfing, fraud after the fact is too late, futile, confusing.

Correct answer is to have verifiable sources, citations.

What we used to call "journalism".


That's all well and lovely, but the reason reviews have always been popular is that in many domains people trust the man on the street more than they do a journalist or reviewer, for often perfectly valid reasons.

If a restaurant site removed all user reviews and replaced them with a food critic's opinions I would trust it less, not more, fake reviews notwithstanding.


I think what the poster of the comment (new to this, not sure what the correct acronym is) was aiming for is something akin to verified accounts. Not that the person is an expert in reviewing, just that I know the person who is replying is a real human and has name X.

I love getting reviews from locals and non-experts, I'm just really tired of having reviews, comments, and other interactions online with bots or people pretending to be other people.


Facebook page reviews is a good example of this.

~ Khayri R.R. Woulfe


Hey I don't mean to be a jerk, but please don't sign your name on every comment you write. Your username is right there, if I want to know who you are I can just look up a centimeter from your comment.


You're pointless and my signature is none of your business. You're a jerk and a troll as well. https://news.ycombinator.com/newsguidelines.html

~ Khayri R.R. Woulfe


> You're pointless and my signature is none of your business. You're a jerk and a troll as well.

And you have poor diplomacy skills. You lashed out, called someone pointless, a jerk, and a troll. This was directed toward someone who criticized you in a neutral tone.


And you have good diplomacy skills? He's clearly flaming pointlessly about my signature. Why was my signature a big deal in the first place? Just grow the eff up and stop minding too much about trivial things. You're wasting too much energy criticizing non-issues. Just because you write that you don't mean to be a jerk doesn't mean you aren't one; it's just a way to bowdlerize your harshness and attempt to sound civil while actually being pointless in your criticism, which is otherwise called trolling. Calling a spade a spade or plainspeaking is in no way undiplomatic. I just do not sugarcoat it, unlike you, who try to be pointlessly hypercritical.

// Contacting the mods via email to take a look at this issue.

~ Khayri R.R. Woulfe


The reason why I pointed this out to you is because the site guidelines used to have a rule against signing comments, and I was operating on that assumption. I'm frankly surprised it's not in the current version, but you can see it here: https://web.archive.org/web/20160310014355/https://news.ycom...

Just CTRL + F for "sign."

More to the point, I'm sorry you feel like I was flaming/trolling you, I wasn't trying to do that. My memory of the guidelines is technically out of date, but in principle it still doesn't really make sense to me to sign your comments even if the explicit rule has been removed. If you had just done it once I wouldn't have said anything, but I looked at your comment history and noticed that you're relatively new to the community and have signed almost all of your comments.

I just figured I'd politely ask you not to do it since it is pretty redundant - your username is effectively being written twice for every comment you write, you're just adding the full last name explicitly. Sorry you felt attacked.


So it has been sustained that the criticism is pointless.

The excuse is also pretty lame. Judging by your profile, your account is new (less than 50 days old), and the guideline you're referring to goes back to 2016. There seems to be an inconsistency there. Whether you have an older account or not, there is still a clear intention of flaming here. Using a two-year-old archive of the guidelines is pointless to justify your behavior. New accounts will naturally follow the latest version of the Guidelines, so again it is pointless to refer to an old version of the Guidelines.

It is imprudent that you never tried to re-read the Guidelines since 2016, and that excuse hardly holds either, because any update to the site, it turns out, is properly published as news on the front page.

So, again, the elements of trolling, flaming, pointlessness and dishonesty have been sustained.

I perceive this incident as an instance of how old users game the HN system by trolling new users using provocative behavior, virtue signalling and downvoting comments.

But I guess the problem lies in how the HN fringe perceives "civility" and "diplomacy", which is at the level of a crude AI that doesn't get beyond mere keyword bypasses and bowdlerizing techniques. Humans thinking and acting like machines.

~ Khayri R.R. Woulfe


The term itself is problematic - haven't we learned from "fake news" that such an accurate description of literally whatever gets the most clicks will rapidly be turned into a rhetorical cudgel that means "news that I don't like"?

Given the sheer potential for abuse I would advise great caution with measures to remedy it. This problem is ancient; it has been around for literally centuries at least. Just in the article, a contemporary of Oscar Wilde used the technique!


I don't think there's any comparison to be made. The term "fake news" was primarily popularized by a guy who was blatantly lying, whose grasp of the English language barely extends to 5-letter words, and who doesn't even really bother with grammar. "Fake reviews" are literally just fake reviews; it's not even really a technical term, it's just what they are.


I think it was more heavily popularized immediately after Wikileaks' DNC email leaks by the people who opposed the guy you're talking about. Then, the guy you're talking about adopted it as a counter-attack in order to devalue the term. His campaign turned the weapon back on its source. If you were paying attention to how the term "fake news" evolved, it started as a weapon for the left and then was adopted by the right. It's a great way to censor or discredit information that is inconvenient for you, regardless of your political leaning.

https://trends.google.com/trends/explore?date=2015-05-01%202...


Fake reviews are like spam and troublemakers at a comedy club. Nobody but the assholes themselves likes it. Clubs that don't throw out hecklers or fist fighters will fail when people go elsewhere that doesn't suck. There is a clear definition of non-genuine reviews: people who have never tried the product, they are being paid for the review, or you are the seller posting under a sock puppet account.



