I'm agreeing with the "no" votes here. It's extra work and testing complexity for a one-off case that's trivially avoided by spammers.

And where do you draw the line? Should it flag a link with text "htp://ebay.com" that goes somewhere else? "ebay" with a href somewhere else?

There's no technical workaround to educating users.



I'm partial to a simpler, more complete solution: just always force display of the href text on hyperlinks, ignoring the markup between <a> and </a>. Nothing good comes from displaying the text/image instead of the actual URL; at best it's used (usually overused) for an extra aesthetic touch that isn't otherwise useful, at worst it's used by advertisers and scammers to lie to people about the link's destination (tracking links and phishing).
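A minimal sketch of what that could look like as a browser extension content script (just an illustration, not a complete extension):

    // Replace each link's visible text with its actual destination, ignoring whatever
    // markup sits between <a> and </a>. Assumes it runs after the DOM has loaded.
    document.querySelectorAll<HTMLAnchorElement>('a[href]').forEach((a) => {
      // a.href is the resolved absolute URL, regardless of what the inner markup claims.
      a.textContent = a.href;
    });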


I guarantee you that I can craft URLs users will click even if the whole URL is exposed. This "solution" makes developers feel better but provides essentially zero additional security.


Can you give an example? And will it never help?

I think it’s silly to argue against this. It’s like saying “computer security is hard so why bother at all.”

It’s a continued arms race, where you keep making things harder and harder. This strategy is working: the rate at which people are hacked on platforms like iOS is a fraction of what it used to be for general computing. There will always be security holes, but you plug them as you find them, just as you create mitigations against classes of problems to the best of your ability. Why make it easy for the attacker?


Never is a silly standard to measure against.

I'm clearly not saying don't do it at all, I'm saying that this approach won't succeed at anything other than making developers feel like they're Doing Something™. Actual spam filtering and 2FA are examples of real security; showing users the URL is an example of security theater.

The corollary is the nonsensical "security is hard so let's force non-technical users to do it".


Sandboxing to prevent malware installation and password managers to prevent phishing are excellent technical workarounds for this.
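The reason the password manager helps is that it matches saved credentials against the site's origin rather than its appearance, roughly along these lines (purely illustrative, not any real manager's code):

    // Credentials are stored keyed by the origin they were saved on; the manager refuses
    // to offer anything when the current page's origin doesn't match exactly.
    interface SavedCredential {
      origin: string;    // e.g. "https://realbank.example"
      username: string;
      password: string;
    }

    function credentialsForPage(vault: SavedCredential[], pageOrigin: string): SavedCredential[] {
      // A lookalike such as "https://realbank.example.attacker.example" is a different
      // origin, so the phishing page gets no autofill suggestions at all.
      return vault.filter((c) => c.origin === pageOrigin);
    }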


For the latter, as one of my sibling posts points out, the user will go "Huh, stupid password manager didn't fill out my password - I'll have to do it manually". Often the password manager even helps them do this in the name of user convenience.

Only Brick wall UX works. That's what WebAuthn does here. Don't offer the user a way to "continue anyway", don't ask them confusing questions, just a brick wall and no way forward.

The user will probably be emotional. Scammers work hard to make your users afraid, or horny, or confused, and so they really, really want to give their bank credentials to https://honest-this-is-your-bank.not-a-scam.example/tmp/back... and nothing you can tell them is going to make them stop wanting to do that.

The brick wall doesn't care about the user's emotional state and will stubbornly resist. Maybe the user will eventually realise it was a scam, maybe they won't, the brick wall doesn't know or care either way.

Brick Wall UX even helps the software engineer. When a manager asks if you can't just add a banner that says "Hi, this is our new web site, please continue to use your old credentials", and thus undo every second of phishing training your users ever got, the answer with a Brick Wall UX is that it literally can't work. No matter how much they beg and cajole and swear it's just a temporary workaround, it will not work at all, so they're just going to have to go tell the Big Boss that no matter how much was spent on new-brand-name.example, the login system will have to remain forever on login3.long-forgotten-brand.example, because that's what some idiot picked five years ago when it was set up, and too bad.


Can you elaborate on what "brick wall UX" is like to use as a user, or how one implements it? Are there known examples you could point me at? (It's a term I haven't heard before, and haven't noticed anything when searching.)


I don't think it's a known term; it seems to be something the GP created. As for how it works, the concept is simple: if the user wants to do the wrong or potentially insecure thing, just don't let them, period. That's the brick wall.

I'm of two minds about this personally. On the one hand, I appreciate the argument that the only thing that can prevent businesses from doing something bad, stupid or abusive is if it's legally, physically or by design impossible. On the other hand, as a pro user, I do appreciate the ability to override software when it mistakenly tries to prevent me from doing something.


Unless the developer has enough foresight to account for, and sufficiently handle, every single instance where that brick wall would stop a legitimate action (a false positive), I stand vehemently against them.

There are enough examples in the past that clearly demonstrate that developers are not benevolent or competent enough to have complete and final control over the software their users run. Sometimes this control even results in the exact opposite of what the developers originally intended, as was the case with Firefox add-ons just a couple of days ago.

Ultimate control over software should always reside in the hands of the user.


> as a pro user, I do appreciate the ability to override software when it mistakenly tries to prevent me from doing something.

This is really the core problem in security; the world is designed for people like this by people like this, without any serious thought for the implications for the overwhelming majority of users. Do not include "I know what I'm doing" escape hatches, and security will magically get better for the many at the expense of convenience for the few.


Sure. That's why I'm not too big a fan of security. The flip side of your observation is this: the most secure form of computing is a rock. You can ensure users can't be pwned and can't pwn themselves by making the device as useless as possible.

Let's give every user a tablet that has two buttons. You press one, you get a new cat picture. Press the other to "like it". That's all the user needs. All data exchange is end-to-end encrypted from the cat picture provider to the tablet's input & video drivers - can't risk the spooks^Wcompetition knowing what they're looking at. They don't need to do banking - like everywhere else, they just sign a three-party contract with the tablet provider and the bank. This way, the Bad Guys can't steal users' money! Oh, the users also want to watch pictures of squirrels? There's a separate tablet for that, pulling from a separate provider; it's insecure to let these mix on one device!

Seriously, this is how the world would look if security got its wish. There is a point past which security is essentially enslavement, and that's true both in physical security and computer security.


I gave the best example around today already: WebAuthn. Its ancestor U2F has the same behaviour.

The credentials in these protocols depend explicitly on the verified FQDN of the server (and thus you can only use this with HTTPS). When scammer.example asks for your credentials there literally isn't a way to give it credentials for realbank.example. No matter how sure you are that you're a very smart person and definitely need to give scammer.example access to empty your bank accounts, no way to do this is available. Maybe next week you'll still be angry you couldn't do this, maybe you'll realise it was a scam, don't care.
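The binding is visible right in the browser API; here's a minimal client-side sketch (challenge handling and server-side verification omitted):

    // WebAuthn sign-in request. The browser only honours this when the page's effective
    // domain matches rpId, so a page on scammer.example has no way to request an assertion
    // for credentials scoped to realbank.example.
    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder; the server supplies this in practice
        rpId: "realbank.example",                               // credentials are bound to this domain
        userVerification: "preferred",
      },
    });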

This is why Google reports zero successful phishing for their own systems. U2F is mandatory there. Their employees aren't magical; some will fall for scammer.example and they will be really frustrated that they can't use their Google login like it says, and some will scream at their help desk team about how stupid this is and how it's totally broken, and even after they demand that the help desk person be fired, change their password six times, and write a ten page rant on their blog, they still can't give their employee credentials to the scammer and Google remains safe.


Except they're not, because a dumb enough user isn't going to think about their password manager, and they'll enter their password anyway.


They won’t even know what it is.



