Google AdWords Exploit Seen in the Wild (josh.com)
461 points by luu on May 8, 2019 | 154 comments


This is an explicit tool in adwords, believe it or not.

The feature is intended so that you can have a link "to" http://trackersRus.com/ which forwards to http://ebay.com/, without the user seeing that bit of ugly.

It's been used in campaigns for years; I've reported probably hundreds of these distributing malware.


It appears here that the redirection to the ebay.com destination url is not happening and that the user ends up on a different domain.

That kind of situation is usually detected when ads are entered into the Google Ads* platform for review, with ads then rejected for "destination url mismatch". One thing checked is that the final destination url after all redirects matches what is specified in the ad's final url field.

I suspect the scammers here are somehow faking the destination url for Google's bot checker to pass the Google checks and then serving different destination urls to users who they believe are not Google bots.

* Google Ads is now the correct branded name. No longer called AdWords as in the title.


Google's approach here seems totally wrong. The destination URL should be, exactly, the link as shown. If someone wants to track clicks using a third-party tracker, Google should offer an API for that which does not give the third-party tracker any ability to control the destination -- they have plenty of market power to impose this and, heck, they could even charge a small premium.

Most browsers support a lovely feature where the <a> tag has a ping attribute, which is intended for more or less this use case.


Google already works like this in browsers that support it (most modern ones). The ad is linked to the destination URL with no redirects through any advertiser-controlled domain. A third-party tracking URL can be specified, and it will be pinged in the background using the browser's sendBeacon() function. Any redirects in response to the ping don't affect what webpage the browser displays, so they can't be used to hijack the click.

https://support.google.com/google-ads/answer/7544674?hl=en


That’s not the point of the tool - the point of the tool is to turn example.com/cms/category/subcategory/product into the easier to read example.com/product


>That’s not the point of the tool - the point of the tool is to turn example.com/cms/category/subcategory/product into the easier to read example.com/product

Then set up an explicit 301 or 302 on example.com to make this happen, don't hide it in the ad-serving layer.
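For what it's worth, that redirect is only a few lines on your own server. A minimal sketch using Python's standard http.server, with the hypothetical paths from the example above (a real site would do this in its web server config, but the mechanics are the same):

```python
# Serve a 301 from the pretty URL to the real CMS path, on the same
# domain -- no ad-serving layer involved. Paths are the hypothetical
# ones from the example above.
from http.server import BaseHTTPRequestHandler, HTTPServer

ROUTES = {"/product": "/cms/category/subcategory/product"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in ROUTES:
            # Permanent redirect: the browser (and any checker bot)
            # sees the real destination, still on your own domain.
            self.send_response(301)
            self.send_header("Location", ROUTES[self.path])
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(self.path.encode())

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    HTTPServer(("", 8000), RedirectHandler).serve_forever()
```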


Wow, it seems trivial to trick Google's bots with these links. Have the page redirect until the ad is approved, profit?

I'm sure it's easy to find their bot IPs too. Just make a bunch of terrible ads that nobody will click and see who visits the url.

Google needs to abolish this link policy; I don't see how it's enforceable.


This is called "cloaking", and it's a cat and mouse game between ad networks and bad actors. You're describing the simplest thing that can be done to cloak a website from an automated checker, but there are far more advanced techniques as well.


> Have the page redirect until ad is approved, profit?

Wouldn't work - they do periodic checks after approval. Something more sophisticated appears to be going on here.

>Google needs to abolish this link policy, I don't see how it's enforceable

Link analytics and link trackers are perfectly legitimate. There are many situations in which it is necessary or desirable to go via intermediate urls before the final destination. Throwing out the baby with the bathwater definitely isn't the answer here.


> Wouldn't work - they do periodic checks after approval. Something more sophisticated appears to be going on here.

What if you randomly redirect, say, 95% of clicks to eBay and take the remaining 5% to your phishing site? Each of Google's periodic checks would only have a 5% chance of catching you, but if you can get enough impressions over eBay's legitimate ads (which is an entirely separate facet to all of this), you'd still get a ton of bites, because so many people get to eBay the way Aunt Sue does.
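To put a number on that: if each periodic review has an independent 5% chance of landing on the phishing redirect, then k checks all miss it with probability 0.95^k. A quick back-of-envelope sketch:

```python
# Back-of-envelope check on the 95/5 cloaking scheme described above:
# each review catches the scam with probability phish_rate, so k
# independent checks all miss it with probability (1 - phish_rate)**k.
def miss_probability(phish_rate, checks):
    return (1 - phish_rate) ** checks

# A single check almost certainly misses it...
assert round(miss_probability(0.05, 1), 2) == 0.95
# ...and even after 20 checks the scam survives more than a third of the time.
assert miss_probability(0.05, 20) > 1 / 3
```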

Better yet, your redirect service could look at the client IP address and only redirect to the phishing site if it matches a known range for, say, Comcast or Charter. Or use it to drill down even farther and set up multiple spear phishing campaigns.
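A sketch of that IP-based cloaking, using Python's ipaddress module. The CIDR blocks here are made-up placeholders (documentation/test networks), not actual Comcast or Charter ranges:

```python
# Hedged sketch of the IP-targeted redirect described above. The CIDR
# blocks are reserved documentation ranges, standing in for the ISP
# ranges a real attacker would target.
import ipaddress

TARGET_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical ISP A
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical ISP B
]

def destination(client_ip):
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in TARGET_RANGES):
        return "https://phish.example/"   # targeted victims
    return "https://www.ebay.com/"        # everyone else, incl. checker bots
```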

It seems like there's no shortage of ways to abuse this, and for Google to allow redirects without some sort of robust verification that the advertiser owns the destination domain (such as @gnud's certificate-based suggestion in a sibling comment) seems downright negligent, if that is indeed how they operate.


Perhaps letting the ad people have a free-for-all with tech is a bad idea. I feel like intermediate URLs should never be OK


There's other ways to signal ad impressions that aren't a huge security risk. Maybe not quite as convenient, but I doubt banning redirects would have a measurable effect as long as Google gave a deprecation warning.

You can achieve the same thing without redirects using URL parameters or the referer field. Google should ban any destination that doesn't match the site's domain. It's an unfixable security risk that's being actively exploited.


I've had this problem on Facebook. I've reported some ads for various (relatively benign) scams for herbals and the like, that use a famous newspaper as 'their url', when they have nothing to do with it.

Facebook closed my report as 'not against ad policy'.

Anyway, this is actually easily fixed without losing tracking/campaign flexibility, by requiring ad orders to be signed by a certificate valid for the target domain, if the URL is different from the displayed one.


> Facebook closed my report as 'not against ad policy'.

Heh, makes you wonder, what's the ad policy? Sounds like: 'They pay us money, so it must be legit?'


If Google enforced that hosts/domains matched, could you not redirect from your own host to the tracking provider (and them back to you)?


Yes, but most of the people buying ads are not technically competent enough to make this happen.

Google's solution ensures that the marketing people get what they want without the technical people standing in the way.


Wouldn’t a simple solution to this problem be to prove ownership of the domain you want displayed? Why is this not done yet? It's almost standard practice nowadays for many types of services.


A lot of companies send ads to amazon.com rather than their own web site.


Yep. This is why you never click on ads, period.


I wonder why Google doesn’t follow the redirect, and ensure the followed link matches the displayed link?

I get that there are workarounds like changing the redirect after Google checks it, but there are solutions to this too (like running checks every so often to ensure the link redirects to the same domain).
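A sketch of the check being proposed: follow the redirect chain and compare the final host against the displayed domain. The redirects are modeled as a plain dict so the example stays offline; a real checker would issue HTTP requests and read Location headers.

```python
# Follow a (dict-modeled) redirect chain and verify the final host
# matches the host of the displayed URL. The chain below is a
# hypothetical example, not real data.
from urllib.parse import urlparse

def final_url(start, redirects, max_hops=10):
    url = start
    for _ in range(max_hops):
        if url not in redirects:
            return url          # no further redirect: this is the destination
        url = redirects[url]
    raise ValueError("redirect loop or chain too long")

def matches_displayed_domain(start, displayed, redirects):
    return urlparse(final_url(start, redirects)).netloc == urlparse(displayed).netloc

honest = {"https://trackersRus.com/x": "https://www.ebay.com/"}
assert matches_displayed_domain("https://trackersRus.com/x",
                                "https://www.ebay.com/", honest)
```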


Possibly the checks are identifiable by User-agent, Referer, client address, timing etc.

For this purpose there's a lot of room for false positives. It doesn't matter if some real users get redirected to ebay.


[flagged]


Online ad campaigns depend on redirects to reconcile clicks and analytics. It is dumb, but if you want to get customers/make money in the ad space, you have to support this.


Sure, but you could make them add a meta tag or upload a validation file to prove they are actually working on behalf of the final URL, just like they do to validate that you're in control of the URL for the webmaster console. If the malicious ad buyer has access to ebay.com's server all bets are off, but I feel like that happens a lot less often than this.


I posted this above but thousands of companies send their ad traffic to their amazon.com product page, and there are a ton of other one-off examples.


When I worked at Apple I filed a Radar (bug report) asking for the mail client to check that, if the text of an <a> tag was a url, the text matched the href field. What followed, on the Radar, was a lengthy debate about this. If I recall correctly, the people who opposed it basically argued that, if this feature was implemented by the mail client, spammers would simply find another way to inject false links. We (those who wanted the feature) lost. But I still think that any web app that shows "http://whatever" in a link field should ensure that the href field is "http://whatever".
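For illustration, here is a toy version of such a check (my own sketch, not the Radar's proposal): flag any <a> whose visible text looks like a URL but whose host differs from the href's host.

```python
# Toy link auditor: if an <a> tag's visible text starts with http(s)://
# but its host differs from the href's host, record it as suspicious.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href = None        # href of the <a> we are currently inside
        self.suspicious = []    # (visible_text, actual_href) mismatches

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href")

    def handle_data(self, data):
        text = data.strip()
        if self.href and text.startswith(("http://", "https://")):
            if urlparse(text).netloc != urlparse(self.href).netloc:
                self.suspicious.append((text, self.href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.href = None

auditor = LinkAuditor()
auditor.feed('<a href="https://evil.example/x">http://ebay.com/deal</a>')
assert auditor.suspicious == [("http://ebay.com/deal", "https://evil.example/x")]
```

As the replies below point out, this only catches text that parses as a full URL; lookalike text like "www.bank.com" or "ebay" sails straight through.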


Sounds reasonable, but only if the text is in a URI handler format: http://, file://, ftp://, steam://, etc.

But then what about almost-URI text? www.yourbank.com without the https://. Or lookalikes like "https:\\". Or what about proxies? Does https://l33th4x.com?proxy=www.bank.com count if the text is www.bank.com?

Filtering crap like this sounds reasonable but very quickly becomes an exercise in what I call "Giving a mouse a cookie." Now you have a huge complex chunk of code to parse and filter URLs/URIs and every look-alike you can think of; Did you remember that automatic deserialization that kicked in when your values were sent to a callback?

2 days of work later, your new build has fancy-pants <a> tag filtering that contains an unknown number of bugs, and phishers just register and use new domains that look kinda legit and follow your new text/link rules. www.security-wellsfargo-audit.com/login looks legit to the mark; your mail client allowed it, so it must be OK.


> Filtering crap like this sounds reasonable but very quickly becomes an exercise in what I call "Giving a mouse a cookie." Now you have a huge complex chunk of code to parse and filter URLs/URIs and every look-alike you can think of; Did you remember that automatic deserialization that kicked in when your values were sent to a callback?

This sounds an awful lot like how software development in general works...

Isn't all software just some version of "Giving a mouse a cookie."?


uh... ok... I'll bite... why do you call it "giving a mouse a cookie"?


See https://en.wikipedia.org/wiki/If_You_Give_a_Mouse_a_Cookie

"If you give a mouse a cookie, he'll ask for a glass of milk"

"If you give a mouse a glass of milk, he'll ask for a straw"

And the story keeps going like that, with more and more requests coming in. It's almost like never-ending scope creep, except the book has an infinite loop in it.


There's also an animated series on Amazon now... my youngest loves it.



That filtering code should already be there in the web app.


I'm agreeing with the "no" votes here. It's extra work and testing complexity for a one-off case that's trivially avoided by spammers.

And where do you draw the line? Should it flag a link with text "htp://ebay.com" that goes somewhere else? "ebay" with a href somewhere else?

There's no technical workaround to educating users.


I'm partial to a simpler and complete solution: just always force display of href text on hyperlinks, ignoring the markup that's between <a> and </a>. Nothing good comes from displaying the text/image instead of the actual URL; at best it's used (usually overused) for an extra aesthetic touch that's not otherwise useful, at worst it's used by advertisers and scammers to lie to people about the link's destination (tracking links and phishing).


I guarantee you that I can craft URLs users will click even if the whole URL is exposed. This "solution" makes developers feel better but provides essentially zero additional security.


Can you give an example? And will it never help?

I think it’s silly to argue against this. It’s like saying “computer security is hard so why bother at all.”

It’s a continued arms race, where you keep making things harder and harder. This strategy is working: the rate at which people are hacked on platforms like iOS is a fraction of what it used to be for general computing. There will always be security holes, but you plug them as you find them, just as you create mitigations against classes of problems to the best of your ability. Why make it easy for the attacker?


Never is a silly standard to measure against.

I'm clearly not saying don't do it at all, I'm saying that this approach won't succeed at anything other than making developers feel like they're Doing Something™. Actual spam filtering and 2fa are examples of real security, showing users the URL is an example of security theater.

The corollary is the nonsensical "security is hard so let's force non-technical users to do it".


Sandboxing to prevent malware installation and password managers to prevent phishing are excellent technical workarounds for this.


For the latter, as one of my sibling posts points out the user will go "Huh, stupid password manager didn't fill out my password - I'll have to do it manually". Often the password manager even helps them do this in the name of user convenience.

Only Brick wall UX works. That's what WebAuthn does here. Don't offer the user a way to "continue anyway", don't ask them confusing questions, just a brick wall and no way forward.

The user will probably be emotional. Scammers work hard to make your users afraid, or horny, or confused, and so they really, really want to give their bank credentials to https://honest-this-is-your-bank.not-a-scam.example/tmp/back... and nothing you can tell them is going to make them stop wanting to do that.

The brick wall doesn't care about the user's emotional state and will stubbornly resist. Maybe the user will eventually realise it was a scam, maybe they won't, the brick wall doesn't know or care either way.

Brick Wall UX even helps the software engineer. When a manager asks if you can't just add a banner that says "Hi, this is our new web site, please continue to use your old credentials" (and thus undo every second of training about phishing your users ever got), the answer with a Brick Wall UX is that it literally can't work. No matter how much they beg and cajole and swear it's just a temporary workaround, it will not work at all, so they're just going to have to tell the Big Boss that no matter how much was spent on new-brand-name.example, the login system will have to remain forever on login3.long-forgotten-brand.example, because that's what some idiot picked five years ago when it was set up, and too bad.


Can you elaborate on what "brick wall UX" is like to use as a user, or how one implements it? Are there known examples you could point me at? (It's a term I haven't heard before, and haven't noticed anything when searching.)


I don't think it's a known term; it seems to be something GP coined. As for how it works, the concept is simple: if the user wants to do the wrong or potentially insecure thing, just don't let them, period. That's the brick wall.

I'm of two minds about this personally. On the one hand, I appreciate the argument that the only thing that can prevent businesses from doing something bad, stupid or abusive is if it's legally, physically or by design impossible. On the other hand, as a pro user, I do appreciate the ability to override software when it mistakenly tries to prevent me from doing something.


Unless the developer has enough foresight to account for, and sufficiently handle, every single instance where that brick wall would block a legitimate action (a false positive), I stand vehemently against them.

There are enough examples in the past that clearly demonstrate that developers are not benevolent or competent enough to have complete and final control over the software their users run. Sometimes this control even results in the exact opposite of what the developers originally intended, as was the case with firefox addons just a couple of days ago.

Ultimate control over software should always reside in the hands of the user.


> as a pro user, I do appreciate the ability to override software when it mistakenly tries to prevent me from doing something.

This is really the core problem in security; the world is designed for people like this by people like this, without any serious thought for the implications for the overwhelming majority of users. Do not include "I know what I'm doing" escape hatches, and security will magically get better for the many at the expense of convenience for the few.


Sure. That's why I'm not too big a fan of security. The flip side of your observation is this: the most secure form of computing is a rock. You can ensure users can't be pwnd and can't selfpwn themselves by making the device as useless as possible.

Let's give every user a tablet that has two buttons. You press one, you get a new cat picture. Press the other to "like it". That's all the user needs. All data exchange is end-to-end encrypted from the cat picture provider to tablet's input&video drivers - can't risk the spooks^Wcompetition knowing what they're looking at. They don't need to do banking - like everywhere else, they just sign a three-party contract with the tablet provider and the bank. This way, the Bad Guys can't steal users' money! Oh, the users also want to watch pictures of squirrels? There's a separate tablet for that pulling from separate provider; it's insecure to let these mix on one device!

Seriously, this is what the world would look like if security got its wish. There is a point past which security is essentially enslavement, and that's true both in physical security and computer security.


I gave the best example around today already: WebAuthn. Its ancestor U2F has the same behaviour.

The credentials in these protocols depend explicitly on the verified FQDN of the server (and thus you can only use this with HTTPS). When scammer.example asks for your credentials there literally isn't a way to give it credentials for realbank.example. No matter how sure you are that you're a very smart person and definitely need to give scammer.example access to empty your bank accounts, no way to do this is available. Maybe next week you'll still be angry you couldn't do this, maybe you'll realise it was a scam, don't care.
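A grossly simplified toy of that origin binding (real WebAuthn uses public-key challenge signing, not a lookup table, but the scoping behaves like this): credentials are keyed by the verified relying-party domain, and there is simply no call that releases one domain's credential to another.

```python
# Toy illustration of WebAuthn-style origin scoping. The domains are
# the hypothetical ones from the comments above.
credentials = {}  # rp_id -> credential

def register(rp_id, credential):
    credentials[rp_id] = credential

def get_assertion(rp_id):
    # The browser derives rp_id from the verified HTTPS origin; the
    # user cannot override it, however convinced they are.
    return credentials.get(rp_id)

register("realbank.example", "public-key-credential")
assert get_assertion("realbank.example") == "public-key-credential"
# The scam site's origin finds nothing -- the brick wall.
assert get_assertion("honest-this-is-your-bank.not-a-scam.example") is None
```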

This is why Google reports zero successful phishing for their own systems. U2F is mandatory there. Their employees aren't magical, some will fall for scammer.example and they will be really frustrated that they can't use their Google login like it says, and some will scream at their help desk team about how stupid this is, how it's totally broken, and even after they demand that the help desk person be fired and they change their password six times and write a ten page rant on their blog they still can't give their employee credentials to the scammer and Google remains safe.


Except they're not, because a dumb enough user isn't going to think about their password manager, and they'll enter their password anyway.


They won’t even know what it is.


I also think mail clients should do that, or at least offer it as an option. Most mail readers have an option to disable loading remote content, and this would be another little way to make HTML email less dangerous.

I think (or at least hope) that most people are in the habit of hovering over links in email before clicking them. And I really hope that mail readers never start implementing Javascript. As for web apps, that's the Wild West, and a small fix like this isn't going to tame it.


Maybe, quite simply, have the option to parse <a> tags down to the format "Text - URL". If I remember correctly, some sites from way back had this sort of format occasionally.


WordPress does it in the admin interface. A very clever hack. Something like:

    a:after {
        content: attr(href);
        display: inline-block;
        padding: 0 1ex;
    }


You give way too much credit to users if you think they hover over links, and users can't hover over links on mobile devices anyway.


At least Thunderbird seems to do that: when an email has an <a> tag with text that looks like a URL but doesn't match the href, it throws the "this email is probably a scam" bar above the message.


...which then marks all these newsletters as scams, since the link usually points to an analytics site first?


Sounds to me it’s working as intended.


Sure but regular users will see it and think "this message means nothing, I got an email I know was from my bank the other day and I saw the banner there, too!"


They should just collect stats on their own servers.


This might come as a surprise to you, but not every company has infinite developer time to reinvent things which already exist.


Then they should be held accountable for their choice not to vet ads they send to their users. If this was enforced legally, a bunch of companies would suddenly be able to invest the "infinite time" required to register a damn click.


That's obviously not the hard part.


"The hard part" can still be outsourced to a third party without resorting to redirection chains, either by sending the click information to them on the server-side or by sending it client side using a script.

Either way, "the hard part" is generally undesirable to users because it compromises their privacy in order to manipulate them.


The real answer here is that tech companies are trying to solve this problem with tech - in an effort to cheap out on actually hiring some humans to look at the thing and verify that it's safe.

My eyeballs are not free. I hate advertising and advertisers. I have no pity for the advertising platform that cheaps out on security just because it's expensive.


> the mail client to check that, if the text of an <a> tag was a url, that the text matched the href field.

The use case that this breaks is doing click tracking on links using redirects from a unique url to the actual url (which would be the url displayed in the link text).

To avoid breaking this use case, the best remedy would be to prompt the user with a security warning upon clicking a mismatched link, asking them to verify the url in the url bar. The issue is that doing this selectively teaches the wrong security practice to users: that they can improve their safety by looking at the link text rather than at the url bar after clicking the link.


> The use case that this breaks is doing click tracking on links

I, personally, would be quite happy for this use case to break.


> I, personally, would be quite happy for this use case to break.

Why? If you don't want to be tracked it is pretty easy to avoid. You should already only be getting/opening emails you care about. Emails you don't care about should be unsubscribed from and reported as spam. Granted that links should only be tracked in email you do care about, why do you not want those people to have the information they need to refine and improve these emails so they can better serve and inform you?

I would assume that happiness depends on how this use case is broken.

Would you be happy if the email just doesn't show up or gets shunted to spam? Even if it is a password reset email or an email verification email?

Would you be happy if the link just failed to open, forcing you to copy and paste the link text manually? Why not just do that on your own anyway? No need to have the email client block this for everyone just to suit your tastes.

This leaves us with just the behavior I mentioned above and my argument against it.


> why do you not want those people to have the information they need to refine and improve these emails

Well, it comes at the expense of 1) making things slower for me and 2) making it more difficult to discern phishing emails from legitimate ones. I also find it difficult to believe that all of this analytics is actually doing much to inform me about things I care about.

> Would you be happy if the email just doesn't show up or gets shunted to spam?

It's unfortunate, but I would understand it if it happened. I certainly hope most phishing emails would end up in my spam.


> 1) making things slower for me and

That is a valid reason, but I suspect the extra delay of 2x your ping when you click on a link and wait for its target to load is fairly negligible for most people.

> 2) making it more difficult to discern phishing emails from legitimate ones

You shouldn't be relying on link text to discern phishing emails, that is what the client checking SPF records and the user checking the contents of the url bar are for.

> I also find it difficult to believe that all of this analytics is actually doing much to inform me about things I care about.

Why is that? I would think it is pretty obvious how A/B testing click-through rates for emails could easily help make those emails more informative and easier to use.

If the speed cost and privacy loss are not worth it to you, having the actual (non-tracked) URL available in the link text at least gives users the option to opt out of that tracking.


> Why? If you don't want to be tracked it is pretty easy to avoid.

If you want to track clicks, why not use a subdomain instead of something.weird3rdparty.com? And why would the link text look like a URL? I don't see why

  <a href="https://possiblephishing.com/n1bfsd?j1h7d8Sda">www.ebay.com/viewOrder</a>
makes more sense than

  <a href="https://www.ebay.com/viewOrder">view your order</a>
.

> Even if it is a password reset email or a email verification email?

Why would you want to share password reset info with a third party? I get the "easy tracking" argument for newsletters (but I'm pretty sure most are not GDPR compliant), but for password reset emails? why?

> Would you be happy if the link just failed to open

Why would it? You can put a working link in the email directly - you've proven that by putting a different link in the email than the one you pretend you're linking to.


> If you want to track clicks, why not use a subdomain instead of something.weird3rdparty.com?

I don't see what relevance a subdomain has here. As the original issue was described, it was blocking any links where the url doesn't match the link text (if the link text is a url). This means that even tracking that is done on the same domain (e.g. example.com/emailTracking/{{unique_string}} redirects to example.com/viewOrder/412) would be blocked.

If you start only blocking those links when the domains don't match, things become more complicated to implement and test, especially once you consider subdomains, etc. The question remains: what is the point of this, and is it worth it? Are we really promoting good security practices or just adding security theater cruft?
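To make the subdomain complication concrete, here is a naive sketch. An exact-host comparison would reject same-site tracking subdomains, so this version compares the last two labels instead; that heuristic is itself wrong for registries like .co.uk, which is exactly the kind of creeping complexity at issue (real code would consult the Public Suffix List):

```python
# Naive "same site" check: compare the last two DNS labels of each
# host. Deliberately simplistic -- it treats foo.co.uk and bar.co.uk
# as the same site, illustrating why this gets complicated fast.
from urllib.parse import urlparse

def registrable(host):
    return ".".join(host.split(".")[-2:])

def same_site(url_a, url_b):
    return registrable(urlparse(url_a).netloc) == registrable(urlparse(url_b).netloc)

# Same-domain tracking subdomain passes...
assert same_site("https://track.example.com/u/123", "https://example.com/viewOrder")
# ...while a different domain is rejected.
assert not same_site("https://phish.example.net/x", "https://example.com/viewOrder")
```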

> Why would you want to share password reset info with a third party?

Who said anything about a third party? As you mention above, this blocks people who are doing their own tracking on the same domain. I don't have any numbers, but I assume that ESPs like MailGun and SendGrid are probably the most common third parties used to track email link clicks (since they will automatically substitute link urls with trackable redirects if you enable it). In this case, that third party already has access to the entire content and metadata of the email, and giving them link click data is a relatively small addition.

> Why would it? You can put a working link in the email directly - you've proven that by putting a different link in the email than the one you pretend you're linking to.

Email links already work one way; perhaps if Apple implemented this feature, eventually every email sender out there would switch to links that would work in Apple Mail again. In the meantime, Apple has to deal with confused and dissatisfied users wondering why links in their emails don't work.

How would you implement link url / text mismatch blocking? Have you thought through the consequences for your users and their understanding of security or satisfaction with your product?


> Are we really promoting good security practices or just adding security theater cruft?

We're taking away easy attacks, just like blocking known spammers from delivering emails to your MTA. Yes, it doesn't solve spam completely, but it reduces the amount. Add other things like SPF, DKIM etc and you can identify even more malicious emails and warn the user. And certainly, the OP's description mentioned matching urls (though the text would have to match the href, so "ebay.com" would likely be fine for "https://ebay.com/something"), but again: what's the harm in linking to the actual URL? Why the need to hide it, IF you already make the link text look like a URL? I'm sure there are also reasons why a site operator wants to avoid SSL, but I do like the fact that many browsers do warn users when (potentially) sensitive data gets transmitted via plain text.

> Who said anything about a third party?

The super majority of click-tracking runs via third parties. And yes, every additional detail you share with them (PII has gotten a lot of attention lately, and I'm beginning to come around and love GDPR) will put your users more at risk - now it's not just your database that risks exposing your users when a leak/breach happens, it's also your email provider's.

> How would you implement link url / text mismatch blocking? Have you thought through the consequences for your users and their understanding of security or satisfaction with your product?

I'm sure that it's not trivial, but few things are, so rejecting it for that reason doesn't sound like a good idea to me. "Hey, let's not do SSL, it's not that simple to build and might inconvenience a user that has their clock set 100 years into the future"


> We're taking away easy attacks

What attacks are blocked by this that are not also blocked by using SPF and warning users when SPF is not present or doesn't match?

> what's the harm in linking to the actual URL? Why the need to hide it, IF you already make the link text look like a URL?

This is practically irrelevant. We are discussing the costs/benefits of the implementation of a client side feature, not the ideal form that all emails sent should adhere to (which is a far broader and more complicated topic.)

However, I do have several practical reasons why the link text and url matching can have negative consequences in practice:

Third party tracking issues:

1) Practical difficulty: Most link click tracking is done via third parties (usually the sender's ESP). No click tracking tool I've seen offered by an ESP provides any way to use the link url as the link text (since the link url is usually processed and rewritten by the ESP after the email is composed and sent.)

2) Confused users: The average user has no idea what an ESP is and could be needlessly confused / frightened when shown links to some random domain, even when the operator of that domain is trusted by the sender.

First party tracking issues:

3) Removal of user choice: If you only give the trackable url in both link target and text, you FORCE your users to be tracked, rather than giving them the option to select the link text and paste it directly into their browser.

4) User experience: The url that is shown in the link text can be much more informative as to what it does (e.g. shows you your order) rather than an uninformative, generic click tracking link.

> The super majority of click-tracking runs via third parties.

Can you find me some email link tracking services that are not also involved in sending that email in the first place?

You do expose additional information about the user's clicks and IP address. If this is something your company is concerned with, you probably need to implement first-party click tracking both for email AND your website. You should probably also run your own ESP.

If you are trying to promote the use of first-party click tracking (or just discourage third-party click tracking), it would be far better to block all links that go to domains that don't match the sender's domain (or alternately that don't match the sender's domain's SPF record.)

> I'm sure that it's not trivial, but few things are, so rejecting it for that reason doesn't sound like a good idea to me.

I am not rejecting it because it's not trivial. I am rejecting it because thinking through how that implementation would realistically work makes me think that it would accomplish almost nothing and possibly even negatively impact users' understanding of security.

You seem to think that is not true, so I am asking how this feature can be implemented in a way that has a positive impact on security.

My biggest concern is that we shouldn't do anything to teach people that sometimes they CAN trust the link text rather than needing to check the actual URL they end up at. As far as I can think, all the attacks that this stops are better stopped by checking SPF and strongly warning users when it is not present or does not match.


> My biggest concern is that we shouldn't do anything to teach people that sometimes they CAN trust the link text rather than needing to check the actual URL they end up at.

I don't believe that "copy link text to avoid tracking" is a relevant part here, so I still don't understand why you'd want to give the impression of a text-link with a different URL. I see lots of malicious reasons, but I don't see valid ones where there is a strong case that this is a necessity. Why should we teach users that "don't trust your lying eyes, just click on whatever" is ever a good idea? Why shouldn't we teach them to not touch something that is trying to deceive them? If we teach them to ignore these things, we're making it easier for scammers.

Sure, blocking links to domains that don't match the sender's may be something as well, but I do see lots of cases where that's totally normal, e.g. me sending you an email saying "hey, I read your blog entry, and this site here does what you want". Mind you, that's just a link; it's not a link that is trying to confuse you about its true target.

> You do expose additional information about the user's clicks and IP address. If this is something your company is concerned with

If any company isn't concerned with that, they either don't do business in Europe or they should talk to a lawyer. ;)

> If you are trying to promote the use of first-party click tracking (or just discourage third-party click tracking)

Neither is my intent. I just don't see valid reasons to pretend you're linking to one URL when you're linking to another when there's an easy alternative: put words into the link text, not URLs.

It's like FB's idea to pressure users into giving them their passwords for their email account. Terrible idea, no valid business case ("it's easy and we can really check that they are the owner of that email account" isn't valid), but lots of reasons for malicious actors, so somebody telling you "give me your email password" is a warning sign for everybody. Trying to confuse users about what URL you're linking to is as well.


> why do you not want those people to have the information they need to refine and improve these emails so they can better serve and inform you?

Email A/B testing is always about marketing and trying to manipulate users, in my experience. If the content is the same, but a different button gets more clicks, it's hard for me to believe users are finding a benefit in that. I think it's far more likely that the content isn't all that compelling, but the UI tweaks managed to tickle some part of the reader's subconscious in the right way for them to click.

In other words, emails with quality content don't have marketing teams running them.

This is pretty much like all targeted advertising. The benefit to the company is very clear, but I don't think I'd be missing much if email analytics went away.

Just as a personal opinion that I'm sure plenty of people disagree with, I dislike it when products are hyper-customized based on a lot of analytics. For instance, Netflix has a ton of data on me, but all of their tests and micro-optimizations make me use the app less. I would much prefer their product if they threw all of that info in the trash, rolled the UI back 5 years, and showed me more generalized star ratings again.


Yeah, but that's annoying, and almost everyone who isn't "techy", for lack of a better term, won't even bother reading it and will just press OK.

I mean, can you honestly tell me you read the Google privacy notice, or do you just click the down arrow till the OK box appears?


> Yeah but that's annoying

Yes, it is annoying. My point is that if you want to be annoying, you should be annoying the user when they visit ANY external links, as trusting the link text isn't a practice that should be taught/encouraged.

> almost every one who isn't "techy" for lack of a better term won't even bother reading it and just press ok.

That depends on how short and well worded the alert is. However, there is certainly an attention budget that can be used up with pointless / low value alerts (which is why I would assume that Apple did not go this route).


Email marketers would just stop having text in <a> tags that looks like URLs to get around it. They already do that anyway, for the most part.

Phishing attempts will mimic real emails, so they will do the same.


I've reported this back in 2017: https://news.ycombinator.com/item?id=13413399 (Though the screenshot was on G+, so RIP.)

It's even been done to youtube.com before! Clicking ads is inherently dangerous, as they are allowed to show URLs which do not reflect the URLs they will actually route you to. You should never click on an ad.

This is a scenario that violates any reasonable convention of good web behavior, but Google won't fix it because the advertisers are how their bills get paid.


Isn't this true about any link?


No. If you hover over a link in your web browser, regardless of what it says on the link, the hover text (often appearing at the bottom left of your web browser) should show you the real, full destination URL. Try hovering over any link in HN, and you'll see the URL you're going to actually go to when you click on it.

However, when you hover over a Google Ads link, it does not do this. It shows you a friendly URL for the destination (such as https://www.ebay.com) but when you click on it, you get redirected with a bunch of tracking stuff added or even through a URL not on the ebay.com domain, as shown in this "exploit". In fact, even if an advertiser were to use a "clean" link as the destination, you first get redirected on that click through a google.com URL, even though the hover text is still lying about the destination.

I'm not even sure what it's doing here, there's some neat JavaScript in play. The hover text shows the "clean" URL, but if I inspect it, and then hover over it again, it shows the real redirect URL through google.com.


They detect a left-click and in the click handler they replace the href, so it's impossible to see before navigating.

Pretty evil huh?
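The swap-on-click trick described above can be sketched roughly like this; trackingUrlFor() and the redirect URL shape are made-up stand-ins for illustration, not Google's actual code:

```javascript
// Illustrative stand-in for whatever redirect URL the ad server generates.
function trackingUrlFor(cleanUrl) {
  return "https://www.googleadservices.com/pagead/aclk?adurl=" +
    encodeURIComponent(cleanUrl);
}

// Until interaction, anchor.href stays the friendly URL the status bar
// shows on hover; the rewrite happens inside the mousedown handler,
// too late for the user to inspect before navigation.
function installSwap(anchor) {
  anchor.addEventListener("mousedown", function () {
    anchor.href = trackingUrlFor(anchor.href);
  });
}
```

In a real page this would be wired to each ad anchor; the user only ever sees the clean URL until the click is already in flight.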


You’re wrong about being able to trust standard anchor links: you can’t. You can intercept the click using an onclick handler and redirect the user to wherever you want.

Viglink and Skimlink do this for affiliate programs, which is somewhat legitimate.


We posted at the same time, you're right. Disabling js fixes this (and so many other problems)

js websites cannot be trusted


Google has been doing this on the google.com search page for ages, before "upgrading" to ping.

I used to run

    document.addEventListener('mousedown', function (event) {
      // Unwrap Google's "/url?url=..." redirect links back to the real destination.
      if ((event.target.tagName.toLowerCase() === 'a') && (event.target.childNodes[0]) && (/^(https?:\/\/(www\.|encrypted\.)?google\.[^\/]*)?\/?url/.test(event.target.href))) {
        var matches = /[?&](url)=(.+?)&/.exec(event.target.href);
        if (matches != null) {
          event.target.href = decodeURIComponent(matches[2]);
        }
      }
    }, true);
nowadays if (link.ping) link.ping = null; is enough


browser.send_pings = false

on Firefox


Hacker life! NEVER click an ad


Are there trademark infringement issues here, particular on Google's part? They are getting paid (probably a lot) to display this ad, and are explicitly allowing buyers to lie about their identity.

If I were eBay, I'd be getting my lawyers on this immediately. Every dollar getting paid to Google for this ad is a dollar out of my revenue, and a lost customer, and is illegal.



Second link is broken, and the first is about a different thing (triggering an ad based on a competitor's trademark).

Pretending to be a competitor clearly violates trademark law. But I somewhat suspect these fraudsters aren't that concerned with trademark law.



It looks like HN drops trailing periods at the end of links, which breaks this. Here's a working link (yay URL encodings).

https://en.wikipedia.org/wiki/Rosetta_Stone_Ltd._v._Google,_...


And eBay will be made responsible for damages. "I click on that ebay ad and then my computer was locked down and a hacker said I need to pay him to get my files back".


I feel like all the technical arguments here are beside the point. The ad is designed to take you to a page, which tells you a lie, to convince you to give them your money.

We already have a legal term for people that make money by misrepresenting something, it's called fraud.

Sure, you can tell me it will still be a cat-and-mouse game and that laws aren't gonna reach into whatever sort of click-farm network exists far outside US jurisdictions, but make it so people are held accountable for this kind of stuff.

I'm no fan of the carceral state, and I'm not suggesting that we throw people in jail or drone-bomb their server farm; perhaps large fines and getting banned from making ads across any platform would work. I dunno, it just seems there are not many incentives against this sort of behavior in an ad-dominated internet.


In this case, the technical side is also at fault for allowing this (on every other website, aside from Google search results, the status bar shows on hover where you will be taken if you click), but I do agree that we very often talk about the technical aspect and not the legal one.

I don't know any country where anyone goes to the police when they had a malware infection. It's a little like countries where there is no point going to the police for theft: nobody was killed, so the police have better things to do. Here too, if you're not a huge corporation with millions in damage, they won't even look at it, even if you supply logs that point to an IP within their jurisdiction. (An example from a few years ago in the Netherlands: my employer was hacked and hundreds of customer websites were taken offline; the IP address came from a home connection in the same city we were in, and the police took the report straight to /dev/null...)

The only way to get anything done legally is by starting lawsuits yourself, which doesn't work for criminal cases, but oh-so-conveniently works for online copyright infringement.


> We already have a legal term for people that make money by misrepresenting something, it's called fraud.

A nitpick, but if that was the definition of fraud, most of existing advertising would land people in jail. Unfortunately, laws around advertising are way too lax.

It's definitely not a technical problem, but the technical issues discussed are a symptom of it. They're enabling scummy behaviour in order to profit from it.


Well, I can certainly tell you my snakeoil will improve your skin (I'm not making any medical claims and this isn't FDA approved), but I can't advertise that a new study has found that all people named TeMPoral will die within 10 days unless they buy my vitamin with a 100% cure rate.

I agree with you both about advertising being generally an awful thing that is almost always solely designed to manipulate someone rather than give them information, and that the laws are too lax, but this sort of YOU HAVE A VIRUS advertising would be illegal.

(Then again it's barely any worse than the postal junk mail I get designed to look like a sort of official bank repossession notice trying to get me to buy some scammy insurance...)


Google are in complete control of the links on their web pages, so why shouldn't they be held accountable for it? Is there a reasonable argument against this that doesn't again boil down to technicalities? If they're paid money by fraudsters to facilitate the fraud and actively misrepresent the addresses of their links, they're in on it, as far as I'm concerned.


This has been a known issue with Bing and Google for a while now.[1][2]

1. https://twitter.com/sephr/status/1056626456770428929

2. https://twitter.com/sephr/status/1055751684146655232


Every once in a while I'll do a search on Google from a browser with no blocker for something like 'ebay' or some other big brand name, and I'm always surprised to see that the big brand has bought ads for itself. It never made sense to me, since they're always the first search result anyway.

Now I can only assume two things...

1. Some number of those ads were scams.

2. Some large number of people just blindly click on the first thing they see below the search box, as long as it's close to whatever they searched for.

Somewhat related... I'm always surprised to see what search results come up first in the iOS App Store for whatever app I'm searching for at the time. It's usually something else; like, search for Uber, and the first thing that comes up is Lyft.


If a large brand like Uber didn't buy (really expensive) keywords like "uber", some of its rivals like Lyft could bid on them. So Uber would lose a customer who was really interested in Uber to Lyft. Exchange company names however you like. It's especially expensive for shops etc.

Google will not change any rules to forbid bidding on brand names, because they are making a ton of money off it. Think of something like Amazon paying more than $1 for every click just to not lose any potential customer. AdWords is a money-burning system.


Interesting note -- Amazon does not allow you to bid for a competitor's brand name, nor does it allow you to use your competitor's brand name as a keyword.

In my experience, you can try, but you'll get 0 impressions. Not sure how this works for very generic brand names though (e.g. "band-aid")


They have some rules in place, try bidding on Fortnite.


I imagine that might be in response to the malware advertising that was going on for fortnite a while back, and how quite a large proportion of the players are kids. As a highly visible brand and a highly vulnerable audience, it makes sense for them to focus efforts there and put in special rules.


I think you are correct in why they put the extra effort in, but the criticism is they should be doing this anyway, for all the ads. Not just because the target audience for a specific keyword is more vulnerable to scams.


Wait ... what ... you can "buy" keywords?


Of course, that's what Google is basically. You search for something and they show an ad related to that. And those ads are not random but related to your search.

Or am I not getting some joke?


Bidding on your own company name is very cheap because your site has a very high quality score for that keyword (ie. Google's algorithms think the users query will be answered by going to that domain).

A high quality score gives you an effective discount in the ad auction. You might only need to pay $0.01 for that ad, whereas your competitor would need to pay $1
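As a rough illustration of that discount, here is a simplified quality-score-weighted second-price calculation; the formula and numbers are illustrative only, since real Ad Rank uses more inputs than this:

```javascript
// Simplified sketch of a quality-score-weighted second-price auction.
// You pay just enough to beat the next-best ad rank, given your own
// quality score, plus one increment.
function costPerClick(myQualityScore, runnerUpAdRank) {
  return runnerUpAdRank / myQualityScore + 0.01;
}

const runnerUpAdRank = 2; // next ad's bid * quality score (made-up value)

// The brand owner (high quality score on its own name) pays pennies,
// while a low-quality interloper pays dollars for the same slot:
const brandCpc = costPerClick(10, runnerUpAdRank);       // about $0.21
const interloperCpc = costPerClick(0.5, runnerUpAdRank); // about $4.01
```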


>> Google's algorithms think the users query will be answered by going to that domain

That's BS semantics. Their algorithms are based more on the age of your domain and other technicalities, rather than some hypothetical meaning of "search intention".

Today you're much more likely to get directed to some SEO-optimised highly monetised blog content because they bought an incredibly old domain name rather than what you're actually looking for.


I think you're speaking to a point the parent wasn't really making. All they're saying is that being known as the canonical source for a brand gives you a steep discount on bidding for that brand name.

We can quibble about the philosophical role of modern search engines I guess, but the basic idea is just that it should be easy to defensively bid on your own thing.


It’s not philosophical quibbling: Google does whatever makes them more money.

For all it is, allowing competitors to bid against each other just makes them more money. And the truth is that the customers who would click on a Lyft ad for an “uber” query would not really care that it wasn't Uber; otherwise they would be savvy enough to find Uber as a lower result.

So frankly “being the canonical source for the brand is a steep discount” is not their policy and I don’t really see how that policy is even motivated, financially or in terms of user-friendliness.


> Their algorithms are based more on the age of your domain and other technicalities

One of which is "dwell time" and another is "bounce rate". Both those correlate highly with how well the site matches the users intention.


And that's why (e-mail, ad, notification, cookie) popups work so great: the time it takes to actually see the content is increased, and thus so is the dwell time.

Bounce is mainly irrelevant as we mostly use Google to find a particular page on a website.


Bounce rate is a metric. If the person clicking decides to stay on your web site after the click-through, then that's a good measure of ad relevance.


It may be that the bidder is an affiliate that gets a commission on your purchase, not the brand owner. They're just trying to get last click on the thing you were about to buy anyway.

I worked in AdWords 12 years ago and this was dominant. Not sure how it is today.


> Somewhat related... always surprised to see what search results come up in the IOS app store first for whatever app I'm searching for at the time. It's usually something else, like, search for Uber, first thing that comes up is Lyft.

When this happens it's usually an ad, isn't it? Unfortunately, ad blockers don't work in the app store.


It used to be that in some web browsers, the background color for AdWords results and organic search results was hard or impossible to distinguish. eBay is paying for placement, and it also wants to mitigate potential issues where the top organic result is not their site; this way they have top-of-page presence (if the adblocker is turned off).


By controlling more screen real estate you will get even more clicks to your website. Given that the incremental cost for a navigational query is so low for the top organic result, it is a good decision.


My project has been having fake ads bought on Google to serve malware for a year now and Google doesn't seem to care. At best they might take down one ad, but there are always more.


They are making money off of it, that's a pretty strong incentive not to care.


> It is difficult to get a man to understand something, when his salary depends upon his not understanding it.


If you're in the US[1] you can trademark your project name, and then Google will pay attention. Of course getting a trademark isn't that easy either so the solution may be worse than the problem.

[1] Back when I worked in the business this was only possible in the US, for legal reasons I didn't fully understand.


I saw this Adwords exploit a year ago for “Apple Support” and wrote a feedback. Nothing happened.


Google blocks the advertiser, but they just open a new account with a new address and credit card number


Woah, that's extra evil... but presumably also easy to track down, since someone will have to have a google ad account in order to do this? Is there a federal prosecutor listening? Hello, anyone?


Am I the only one who had to set their browser zoom to 500% to read the contents of the screenshots?


No, you're not. The image contents were completely illegible at standard zoom.


Interesting that they go through so much trouble to spoof eBay.com and then not try to collect logins/passwords.


My understanding is that's not the purpose of the obfuscation. The parties doing this don't generally want to hack users or compromise accounts, they want people to go to their site instead of the more recognizable one.

If they start actively phishing users this way they're solidly in illegal hacking territory on a pretty massive scale. What they're currently doing is "only" a "growth hack" to get more people on their site instead of the competitor's site.


I agree with you in premise, but if you read the post, the website users were redirected to was a phishing site: https://wpdotjoshdotcom.files.wordpress.com/2019/05/snag-003...


Can someone tell me what I'm missing here? The author is saying that, on hover, it indicates a different URL than what the href actually goes to, which is a much more serious issue than just "HTML element text doesn't match the href", which is also what most people here in the comments are talking about. But then the author calls for a solution of just enforcing that the element text match the href, which wouldn't fix this issue! I'm inclined to think that something's not right in this article.


Heh, interesting example.

I would have been weirded out by this specific ad, because when I worked at eBay one thing that got highlighted in a companywide announcement was that some executive had decided to stop running ads for eBay against google searches for "ebay", because that is obviously pointless. (And that this saved a ton of money on advertising without impacting traffic at all.)

With a result like that, I wouldn't have expected them to go back to advertising ebay.com on searches for "ebay".


It's a famous story: https://slate.com/business/2013/03/paid-search-ads-did-ebay-...

That's probably why the malware author targeted the eBay keyword. Without any real competition, it would be cheaper for them to win top of page placement.



We should fix this with a new HTTP header: the browser would verify that an "expect a redirect on the next request" chain ends at the correct destination domain. It would be relatively simple for browsers to implement.

Google could set this on the headers for the outgoing redirect links they already use for AdWords and search results. It would simply make sure that after all the 30x redirects, you actually land on the expected domain.

I can't believe this doesn't already exist. It would be widely useful even outside of ad-tech, when you need redirects for country specific subdomains for instance. And by the other replies on this thread it would fix an ancient security hole at the same time. Anybody want to make an RFC?
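A hedged sketch of what the check could look like; no such header exists today, so the name and semantics here are invented purely for illustration:

```javascript
// Invented semantics: suppose the first response declared something like
// "Expected-Final-Domain: ebay.com". The browser would then follow the
// 30x chain and refuse to render the landing page if the final host
// doesn't match the declared domain.
function landsOnExpectedDomain(redirectChain, expectedDomain) {
  const finalUrl = redirectChain[redirectChain.length - 1];
  const host = new URL(finalUrl).hostname;
  return host === expectedDomain || host.endsWith("." + expectedDomain);
}

// A legitimate tracking redirect that ends on the declared domain:
landsOnExpectedDomain(
  ["https://tracker.example/c?id=1", "https://www.ebay.com/deals"],
  "ebay.com"); // true: render the page

// The hijack from the article, where the chain ends somewhere else:
landsOnExpectedDomain(
  ["https://tracker.example/c?id=2", "https://fake-support.example/"],
  "ebay.com"); // false: block and warn
```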


Google could just display the actual link target on their ads. I'm vehemently against standardizing an HTTP header just to facilitate their continued lying about where their ad links lead. Of course, a user may think twice about loading a link to redirect service when they see the actual URL, and Google wouldn't want that, but that really is their problem.

It's sad that we don't hold one of the largest tech companies in the world directly accountable for linking users to phishing scams from spots they've sold to third parties. Instead we discuss how an HTTP header could make it easier for them to lie. Meanwhile, they get paid for facilitating scams.

> It would be widely useful even outside of ad-tech, when you need redirects for country specific subdomains for instance.

I don't see how that is the case. If I own the hosts and names involved, I know where my links and redirects between the two lead and don't need the client to ensure that it's correct.


That's the kinda stuff that keeps me installing adblockers on computers and phones of relatives that don't know about it.

My aunt's Android was slow as hell and had lock-screen ads. How is this possible?

Be kind to your relatives, install an adblocker.


Off-topic but related to the source... I just finished reading Dark Pools, by Scott Patterson, and immediately recognized josh.com as being Josh Levine’s website. He’s portrayed in a very positive light by the book, and he seems like the kind of person who would rather avoid attention from it, but I do want to recommend the book as one of my absolute favorites.

It seems his old site about The Island ECN is archived at http://josh.com/oldindex.htm

Check it out if you have the slightest interest in electronic and high frequency trading!


Wow. This is a nice find - luckily I never click on ad links, but this is extremely misleading and dangerous.

Imagine a link to your bank or crypto exchange, and it takes you to a phishing site - boom - money gone.


One of those blunders is much more correctable than the other; do not compare them directly in an attempt to give crypto exchanges false equivalence to real banks.


I'm usually much more pro-google than most on HN, but this is quite bad. I understand it enables other features and a technical work-around that also keeps those features is difficult. But this is a disastrous user experience.

It reminds me of caller-id spoofing. Yes, there's a legitimate use case for it. But don't just throw security out the window to enable it, especially when there is a clear and obvious way to abuse it.


The English in that scam is so bad. It matches the South Asia profile. I hear it every day. Exact same bad grammar and 100% identical broken sentence-structure pattern. It is pathological.

If I were to scam people I'd do my homework. But the infamous "just do the minimum of a minimum" quality issues that region is so notorious for are present even in scams.


This isn’t an exploit per se; it is a misuse of a feature. Google has long allowed the display URL to differ from the actual destination URL, and if this weren’t the case it would cause significant problems for advertisers, because many link their ads to offsite tracking domains. It’s a feature, not a bug.


As mentioned here https://news.ycombinator.com/item?id=17126218 , this blog's author is an interesting character, a very early pioneer in electronic stock trading infrastructure.


It is something of note that someone who helped pioneer electronic stock trading is now working in digital advertising (which in many ways is similar to trading stocks).


There are similarities - but I don't think he works in adtech. (Based on this article, and as far as I can tell.)


My in-laws were duped by this exact site, but through a different ad on Google. They were ensnared in it for weeks before we found out and told them to cease. :(


> The link is to here (posted as image so you can’t accidentally click it)…

A very tiny image so you can't actually read it either.


And it's easy to solve: just make Google pay large fines for unauthorized brand bidding. The problem will vanish instantly.


What happens when you call the number?


From seeing scam pranks on YouTube: you get a scam call center telling you to install TeamViewer or something. They run netstat and open the Event Viewer, then tell you your PC has hundreds of errors and people spying on you. If you're unlucky they run syskey [0] as a form of ransomware. In the end they ask for $200 to fix your PC.

[0] https://en.wikipedia.org/wiki/Syskey


Thankfully, Microsoft actually removed Syskey from Windows 10, both because it was no longer a useful security feature, and almost solely used today by these scammers.


Again, another link that does not match the title. An actual exploit would be something like an image ad that executes code on load or triggers a popup. This is just sloppy ad approval on Google's part.


Nice catch Josh! JD


Yet another problem caused by the cancer that is advertising.


Everything on this page boils down to 2 simple things:

1) Google Ads should verify domain ownership of destination domains (via webmaster tools, etc.)

2) Google should expand the "feature" of supporting tracker redirects to allow "final" domain owners to disallow the use of interim clicks (so eBay can simply say: "no, I will always go straight to ebay.com, which I own, and any ad that points to me at any step must also point only to and straight at me")


I assumed this would be the way forward when they introduced Webmaster Tools like a decade ago.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: