There is a simple solution to the AI spam issue (and spam in general) - put a social trust graph at the heart of content aggregation and discovery. Associate all content with real humans and let people control their level of trust in other humans, then derive a trust score for each piece of content as a weighted combination of your trust in the creator and the trust your friends place in the creator. When bad content shows up as trusted for a user and they correct the system, that correction backpropagates trust penalties through the social graph. Letting people see when they lose trust creates a feedback loop that disincentivizes sharing/trusting garbage, since people want to preserve their social trust.
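
For concreteness, here is a minimal Python sketch of the idea; the users, weights, and penalty rule are all made up, and the "backpropagation" here is only one hop:

    # trust[u][v] is how much user u trusts user v, in [0, 1].
    trust = {
        "alice": {"bob": 0.9, "carol": 0.6},
        "bob":   {"spammer": 0.8},
        "carol": {"spammer": 0.1},
    }

    def trust_score(viewer, creator, direct_weight=0.5):
        """Weighted combination of the viewer's direct trust in the creator
        and the trust the viewer's friends place in the creator."""
        direct = trust.get(viewer, {}).get(creator, 0.0)
        friends = trust.get(viewer, {})
        # Each friend's opinion of the creator, weighted by how much
        # the viewer trusts that friend.
        weighted = [w * trust.get(f, {}).get(creator, 0.0)
                    for f, w in friends.items()]
        social = sum(weighted) / sum(friends.values()) if friends else 0.0
        return direct_weight * direct + (1 - direct_weight) * social

    def correct(viewer, creator, penalty=0.2):
        """The viewer flags a creator's content as garbage: penalize each
        friend in proportion to the trust they extended to the creator."""
        for friend, w in trust.get(viewer, {}).items():
            extended = trust.get(friend, {}).get(creator, 0.0)
            if extended > 0.0:
                trust[viewer][friend] = max(0.0, w - penalty * extended)

    print(trust_score("alice", "spammer"))  # nonzero: bob vouches for the spammer
    correct("alice", "spammer")             # alice flags it; bob loses trust
    print(trust_score("alice", "spammer"))  # lower after the correction

A real system would propagate the penalty further than one hop and decay it with distance, but the feedback loop is the same: vouching for garbage costs you trust.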


That’s effectively how PageRank worked, and people responded by creating deep networks of pages voting for each other. To really solve this you either need Sybil protection or to make gaming economically infeasible (e.g. impose a tax on making content available).

I have been wondering for a while whether prioritizing ads in Google Search is Google’s way of creating that economic barrier to spam content (for some definition of spam) - the fact that a brand is willing to spend money can be taken as some indication of “quality”.


Two points to note - first, if each user has their own trust graph rather than there being a single global one, this sort of gaming is basically impossible short of exploits, because a ring of sock puppets only influences users who already trust its members. Second, the behavior is detectable using clique detection and graph clustering, so if you're not limited by the constraints of a near-real-time production system it's fairly straightforward to defeat (or at least relegate to unprofitability).
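
As a rough illustration of the second point, an offline pass with networkx can project mutual-trust edges and flag large cliques as candidate vote rings; the graph, edges, and size threshold below are all hypothetical:

    import networkx as nx

    G = nx.DiGraph()
    # Hypothetical trust edges (who trusts whom).
    G.add_edges_from([
        ("a", "b"), ("b", "a"),
        ("a", "c"), ("c", "a"),
        ("b", "c"), ("c", "b"),   # a, b, c all vouch for each other
        ("d", "a"),               # an ordinary one-way endorsement
    ])

    # Keep only reciprocal trust edges: mutual vouching is the signature
    # of a voting ring, so project them onto an undirected graph.
    mutual = nx.Graph([(u, v) for u, v in G.edges() if G.has_edge(v, u)])

    # Any sufficiently large clique of mutual trust is a candidate ring.
    MIN_RING_SIZE = 3
    rings = [c for c in nx.find_cliques(mutual) if len(c) >= MIN_RING_SIZE]
    print(rings)  # e.g. [['a', 'b', 'c']]

Real rings would add camouflage edges to outsiders, which is why you'd follow this up with graph clustering rather than relying on exact cliques alone.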



