A spambot became the second most powerful member of an Italian social network (technologyreview.com)
130 points by avyfain on Aug 5, 2014 | 38 comments


But the targeted recommendations given to followers were far more effective than those given to non-followers. “In other words, lajello has a greater persuasive power over those who are more aware of its presence and activity,” say Aiello and co.

This seems to be the main measure of the bot's "power", but I'm not sure it's that surprising. What they found is that people already predisposed to linking with random yahoos they don't know (e.g., the bot) are also inclined to link with more random yahoos they don't know. I'm not convinced that demonstrates the persuasive power of the bot so much as that some people will merrily click yes in response to any prompt.


Sadly this might be the future of most of the social web: robots attracting robots.


I suspect there is an opportunity in working on the other side of this equation, providing a service to identify suspected spam accounts. You could either charge for access to the list, or publish it as a public service, in the same way that abusive IP addresses are published.

You could identify spam accounts in a few ways. A naive, but probably effective, approach would be buying the "fake follower" services yourself on places like Fiverr [1]. But that would cost a lot of money quickly. A more sustainable solution would be applying any number of machine learning techniques, which have been shown to achieve 80-90% accuracy. [2]

Personally, I would like to do some research into this area. I would assume you could get pretty far just by graphing the networks of accounts, and comparing the shape of them to authentic accounts.
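
To make the graph-shape idea concrete, here's a minimal TypeScript sketch that scores an account using two structural features (follow ratio and reciprocity). The Account type and the thresholds are invented for illustration; a real system would learn them from labelled data, per the ML work in [2].

    // A minimal sketch, assuming you already have the follow graph for each
    // account. The Account shape and the thresholds below are illustrative
    // inventions, not any real network's API.
    interface Account {
      id: string;
      followers: Set<string>; // ids of accounts that follow this one
      following: Set<string>; // ids of accounts this one follows
    }

    // Fraction of outgoing follows that are reciprocated. Organic accounts
    // usually have some mutual follows; mass-following bots often have few.
    function reciprocity(a: Account): number {
      if (a.following.size === 0) return 0;
      let mutual = 0;
      for (const id of a.following) {
        if (a.followers.has(id)) mutual++;
      }
      return mutual / a.following.size;
    }

    // Crude "looks like a bot" score built from two graph-shape features.
    function botScore(a: Account): number {
      const followRatio = a.following.size / Math.max(1, a.followers.size);
      let score = 0;
      if (followRatio > 10) score += 0.5;      // follows far more than followed
      if (reciprocity(a) < 0.05) score += 0.5; // almost nobody follows back
      return score; // 0 = looks organic, 1 = strongly suspect
    }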

[1] fiverr.com [2] http://scholar.google.com/scholar?as_ylo=2013&q=fake+twitter...


I thought this was common knowledge for social marketers: following everyone you can gains you followers.


The bot didn't follow people - it just visited their profiles.


Funny - I am just developing a Chrome extension that harnesses precisely this: https://chrome.google.com/webstore/detail/livisitor/pafjcmmf... Please don't share the link yet, it's still in beta.


There should be a browser extension that finds all references to social profiles on the page you are visiting, and that displays a warning if any of them is suspected to be a bot.
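
As a rough sketch (TypeScript) of what that extension's content script could look like -- the bot-lookup endpoint here is entirely hypothetical, invented just to show the shape of the idea:

    // Find links to social profiles on the current page and flag any that a
    // (hypothetical) list of suspected spam accounts marks as suspect.
    const SOCIAL_PROFILE = /^https?:\/\/(www\.)?(twitter|facebook|linkedin)\.com\/[^\/\s"?]+/;

    // Invented endpoint for illustration only; no such public API is implied.
    async function isSuspectedBot(profileUrl: string): Promise<boolean> {
      const res = await fetch(
        `https://botlist.example/check?u=${encodeURIComponent(profileUrl)}`
      );
      if (!res.ok) return false;
      const body = await res.json();
      return body.suspected === true;
    }

    async function annotateProfileLinks(): Promise<void> {
      const links = document.querySelectorAll<HTMLAnchorElement>("a[href]");
      for (const a of Array.from(links)) {
        if (!SOCIAL_PROFILE.test(a.href)) continue;
        if (await isSuspectedBot(a.href)) {
          a.title = "Warning: this profile is suspected to be a bot";
          a.style.outline = "2px solid red"; // simple visual warning
        }
      }
    }

    annotateProfileLinks();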


The social network cited by the article, Anobii (http://www.anobii.com), is like GoodReads but perhaps older (2006).


One of my biggest fears about technology is how quickly and tirelessly machines can do things that, until now, we've expected limitations on.

For example, if I exceed the speed limit on a highway for just 5 seconds, I doubt anyone would notice. In fact, enforcement of speed limits is pretty lax at the moment.

If I leave my car in a parking lot for a minute without paying, I rely on the fact that a human watching isn't going to be so efficient as to give me a ticket right away.

But that's only the beginning. There is so much information that a computer network can cross correlate about everyone, including probabilistic assessments about their identity, etc. This can of course be used by businesses (http://www.forbes.com/fdc/welcome_mjx.shtml) or governments to try to do pre-crime (http://www.theverge.com/2014/2/19/5419854/the-minority-repor...).

But again, this is just the beginning. Right now we have a certain threshold for the quality of evidence against someone in a courtroom. With computers being able to come up with dozens of plausible "stories that will stick", anyone can be threatened as the jury (at least in the first few years) won't be able to tell the difference. We already have http://en.wikipedia.org/wiki/Parallel_construction providing admissible evidence in court. With big-data software more sophisticated than https://www.palantir.com/ , those with the machines on their side will have powerful legal weapons that they can wield against anyone.

But even this is just the beginning. We expect a certain quality of output from humans in all areas of life. Being able to match and surpass human output is one thing (http://www.popsci.com/article/technology/algorithm-recognize...) ... but the relentless ability to continually search through a space towards a goal, and instantaneously leverage gains to produce more gains, may begin to overcome any strategic human systems, whether by individuals or entire countries. A computer network could figure out how to infiltrate a social network, topple a regime, completely dilute everyone's trust even in one another, and so forth. The smarter computers get, the more they will be able to overcome the systems we've set up including the biological systems of morality and trust from hundreds of thousands of years of evolution.

In fact if you think this is fantasy, consider that the NSA and other agencies already have rudimentary versions of this, that can only be made more powerful with big data crunching and bots: https://www.techdirt.com/articles/20140224/17054826340/new-s...

So what's next for us as a species?


Agreed. Related to this, there's a school of thought which says our ability to break laws that don't yet reflect updated social mores is key to preventing society from stagnating, by forcing change in those laws.

Let's suppose 1960s civil rights campaigners had had their protests and civil disobedience clamped down on with the ruthless efficiency you could imagine policing reaching in a few decades -- suppose the firehose-wielding cops had also been able to know with perfect intelligence who was planning to break the law, and had been able to easily crush any nascent rebellion against the laws of the time, laws which we now recognise to be wrong. How easily would we have seen change under those circumstances?


And you think the dissidents wouldn't use technology to circumvent the law enforcement bots? You don't seem to understand how brittle, rigid and flawed most/all of these algorithms are. With the right knowledge you can get around pretty much any technical system. Also, just a single person with that knowledge and the ability to implement it is a danger to the bloated, slow government(s) that might seek to use such a system.


By voting? Civil disobedience isn't the only path to changing laws; it just speeds it up sometimes.


Voting is playing by the rules of the system. The world is changing too fast now to use that as a reliable strategy.


Civil disobedience doesn't do anything permanent, unless it results in changes in law.


The issue here is the rules of a human system are in danger of being subverted by people wielding massive computing power.


This doesn't work when the issue is voter suppression. Also, "tyranny of the majority" is a real issue in race relations.


This is a case study but it would be fascinating to see the results in other social networks too.


I believe the problem is that most social networks don't notify you when someone visits your profile. The only one I can think of that does that is LinkedIn, and there you have to pay for a premium account to have the information. Also not sure if their API allows you to pull that easily.


OKCupid has the option of notifying when someone visits your profile.


Related, albeit of a different nature: a pretty classic LinkedIn tactic now is the connection request seemingly from a relatively attractive young woman (at least when targeting men). A similar ruse is using age-correlated female first names when sending mass mailings -- instead of being from "Widget Co Promotions", it's from Brittney @ Widget Co.

Moments ago I followed one connection request I was sent, trying to figure out how I knew this person, and landed on what was clearly a fake company with dozens of these "employees" -- all 20-ish, attractive women[1] with cloned profiles and fake backstories. I was shocked to find that many of them shared connections with me -- people I knew had accepted the connection request because...attractive person. And this likely bot keeps crawling through the social graph, building its master corpus of relationships.

To make it even more humorous, many of these fake profiles had various skills endorsed by seemingly real people. So not only did people connect with fake people based on a picture, they felt the need to endorse them as well, lending them imaginary credibility. Who knows...maybe it'll be the start of a crazy internet romance? Man and virtual bot.

[1] An interesting correlation would be determining someone's "type" from their online profiles, optimizing the success rate of the connection. Do they like the big-haired blonde? Maybe the petite Asian? How about the short-cropped, strong-faced woman? A penchant for redheads?


> "people I knew accepted the connection request because...attractive person"

Or they just accept everyone. I did that for years.


I accept everyone, then quietly delete the annoying people / recruiters every couple of months.


But then the recruiters can mine your network in the interim, and start bugging your contacts. "I asked radicalbyte if he knew anyone who was a good fit for this role. He told me it was right up your alley."


I still do.


For sure, that plays a part as well, but for those who are more discerning I think this does add to the probability of acceptance.

LinkedIn is a tool that reveals far more information than people imagine. I recently un-connected from some old peers because we are now effectively in competition with each other, and it felt dirty seeing connection notifications for them and knowing (later confirmed) which companies they were targeting, which strategies they were pursuing, etc. Not only did I not want to benefit from that; in reverse, I realized the same could be seen from my activities, when clients I was soliciting would connect with me on our first engagement, etc.

In the LinkedIn world, Bud Fox wouldn't have had to follow Wildman around to determine that he was interested in Anacott Steel -- he could simply watch as his M&A team connected with the Anacott Steel executive, etc.


Last time I checked, you can set what LinkedIn will post about your activities, e.g. you can probably disable announcing your new connections.


If you don't treat LinkedIn as a public resume, you're doing it wrong. You should only put information there that anybody, including competitors, could look up quickly.


"doing it wrong" sums up how most people use most software.


if you don't treat LinkedIn as a spammer you are doing it wrong


> not only did people connect with fake people on a picture, they felt the need to endorse them as well, earning their imaginary trust

LinkedIn shoves the 'endorse' feature in your face so heavily that it's almost more difficult to NOT endorse someone for something.


Oh heck, yeah. I have been endorsed for things I didn't even know I was an expert in.


I actually requested that some of my friends endorse me for "Sandwiches".

Partially to highlight the ridiculousness of endorsements, but also because I make really good sandwiches.


My friends endorsed me for some more obscene stuff, and I naturally responded in kind.

Of course, now that I'm looking for it, I can't find the interface in LinkedIn :/


When I had LinkedIn I got endorsements for Copy & Paste.


I accepted an endorsement for "Sarcasm" recently.


mnw21cam is an expert in widgets and spacecraft. Highly recommend.


I called out a very close friend for endorsing me for something that I definitely hadn't listed. He said LinkedIn suggested it and he assumed I must be proficient if I had listed it.


I never get messaged by these attractive bots. Striking out with the ladies once again. :(



