
When the user clicks on specially targeted ads, they reveal themselves to be part of the group and are individually identifiable.



To a first approximation, yes. There isn't any authentication around following an ad link, so technically anyone could click the ad.

But practically, no; nobody's going to randomly click an ad link someone else handed them for funsies. In the common case, the ad company will be able to attribute conversions to the target demo (and make strong assumptions about who clicked, even if Google doesn't explicitly leak that data).
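
To make that concrete, here's a minimal sketch of the advertiser-side join, assuming a hypothetical campaign whose only targeting criterion is the sensitive attribute. The names (CAMPAIGN_AUDIENCE, the cid click parameter, advertiser.example) are made up for illustration, not any real ad platform's API:

    // Hypothetical advertiser-side landing-page logic. The campaign ID arrives
    // as a query parameter on the ad's click-through URL, and the advertiser
    // already knows which audience each campaign was bought against.
    const CAMPAIGN_AUDIENCE: Record<string, string> = {
      "camp-123": "member of the targeted (sensitive) group",
    };

    function recordClick(landingUrl: URL, visitorId: string): void {
      const cid = landingUrl.searchParams.get("cid"); // e.g. ?cid=camp-123
      const audience = cid ? CAMPAIGN_AUDIENCE[cid] : undefined;
      if (audience !== undefined) {
        // One click is enough: this visitor is now individually tied to the
        // demographic the campaign was targeted at.
        console.log(`${visitorId} inferred as: ${audience}`);
      }
    }

    recordClick(new URL("https://advertiser.example/landing?cid=camp-123"), "visitor-42");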


I wonder if it'd make sense to have a plugin that'd share ad links in a p2p network just to screw with these people.


That's click-fraud, and Google at least attempts to identify it (so they're not charging advertisers for fraudulent clicks).

No idea if they share the "That click was fraud" signal with their clients for this kind of display ad.


Click fraud sounds like an ominous term, but let's be real here: we aren't truly opting into all this tracking. I'd rather call it anonymous clicks. As for Google trying to resist it, I strongly doubt any ad network is actively trying to give privacy back to the people.

On a related tangent, Cloudflare creating its own Google Analytics-style tracker could help lessen Google's tracking. But this battle should've been waged about 10 years ago. It's a shame the Hydra has so many formidable heads.


What about AdNauseam? Same basic premise.


I think that calling ad-sharing "click-fraud" is a little ridiculous. I can understand why someone would call an extension like AdNauseam click-fraud, but this would presumably be people legitimately clicking on the ad.


I'm using the industry's technical term, in the sense that the algorithms that would protect an advertiser from having their metrics skewed already exist, because Google needs them to protect its own payment model.


Don't forget that Google allows some ad buyers to run their own JS inside the ad itself. They could embed additional tracking and get the hit without the user even clicking.
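
For concreteness, roughly what that looks like from inside a creative; a hedged sketch using plain DOM APIs rather than any specific ad platform's creative format, with track.example as a made-up endpoint:

    // Hypothetical JS embedded in an HTML5 ad creative. It fires a tracking
    // pixel as soon as the creative renders, so the advertiser records the
    // impression even if the user never clicks.
    function firePixel(campaignId: string): void {
      const pixel = new Image(1, 1);
      pixel.style.display = "none";
      pixel.src = "https://track.example/pixel"
        + "?cid=" + encodeURIComponent(campaignId)
        + "&t=" + Date.now(); // cache-buster
      document.body.appendChild(pixel);
    }

    firePixel("camp-123");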


Not in a way that reveals demographics; that JS is sandboxed and generally can't figure out why the ad was served.

There's room for abuse, but Google will slap the wrists of advertisers who try it.


You don't need to figure out why the ad was served if you target the ad at people who match the demographic you want to identify. If it ran at all, the viewer matched the targeting.
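
Put differently, the targeting itself carries the information; a sketch under made-up names (the campaign definition and beacon handler are illustrative, not any real ads API):

    // Hypothetical campaign bought solely to identify group membership.
    // The buyer never needs to know *why* an impression happened: if the
    // creative ran at all, the viewer matched the targeting below.
    interface Campaign {
      id: string;
      targeting: string[]; // the only audience the ad is allowed to serve to
    }

    const probe: Campaign = {
      id: "camp-123",
      targeting: ["interest: <the sensitive category>"],
    };

    // Buyer-side handler for the render-time beacon fired by the creative.
    function onBeacon(campaignId: string, visitorId: string): void {
      if (campaignId === probe.id) {
        // No click required: the beacon alone implies the viewer matched
        // every criterion in probe.targeting.
        console.log(`${visitorId} matched: ${probe.targeting.join(", ")}`);
      }
    }

    onBeacon("camp-123", "visitor-42");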

I don't know whether Google will slap any wrists. It has gone as far as their ads hijacking the browser and redirecting to scam sites. I've been on the debugging end of a few such attacks over the years; it happened roughly once every other year or so on sites with a healthy amount of traffic. The scammers do try to fly under the radar (only targeting specific sites, only targeting mobile and specific demographics, to make it harder to track down). If Google simply didn't allow them that access, the problem wouldn't exist.


They allow the access because well-behaved advertisers can use it to do animation and such without having to load high-bandwidth media files.

It is a feature of their display ads system that user privacy is protected: advertisers shouldn't be able to resolve an impression to an individual user when the ad is displayed (though they can, of course, begin a business relationship with a user if the user clicks the ad; that's called a "conversion"). Scammers who abuse the JS layer fly under the radar because Google will ban their display ads account if they get caught abusing what the JS allows them to access, such as scraping parameters off the user's machine to estimate whether they're seeing someone they've already seen (a rough sketch of that kind of scraping follows the policy quote below).

"3 Ad Serving. (a) Customer will not provide Ads containing malware, spyware or any other malicious code or knowingly breach or circumvent any Program security measure." [https://support.google.com/adspolicy/answer/54818?hl=en]



