
The thing is, you need MFA to log into Slack, but having a valid cookie bypasses that.

On top of that, once you're in Slack you can access every public channel, search strings, etc. I can tell you that large companies have a lot of things displayed in Slack (CI/CD pipeline results, credentials in logs, alerts, etc.); you don't even need to talk to someone to gather a lot of info.




I work with a large enterprise company that is actively thinking about this problem within their organization. The long and short of it is that everything in Slack is ephemeral past a month or so. Everything gets wiped on a rolling basis, including all media and messages.

If you want some domain to be documented, it happens outside of Slack. Secrets go in a dedicated secret management resource that requires 2FA for every login with strict timeouts and audits.

For the team I work on, this means piling more crap into Jira and Confluence. If a decision is made over Slack, that decision is then codified in a ticket or in a Confluence document. This also means some people constantly send links to the same Confluence pages over and over again, since there's no history for someone to search through.

I think overall it's a decent solution if you're diligent managing the tradeoffs. I can't really think of a better way to keep things off of a platform where they shouldn't exist, other than taking the nuclear option like they're doing now (albeit with a generous countdown timer).


I don't see how the policies you described would help at all in this situation. The main use of Slack here was to get the one-time passcode, which allowed them to log in to the corporate network.

If they did this in your company, then all they'd have to do is scan through the Slack channels till they found a link to the internal company Jira and Confluence sites, and then they'd have free rein to start mapping out your network and preparing for an attack.

I think an effective mitigation that could be implemented on the Slack side would be to sign the cookies and include the origin IP as part of the cookie. If you get a request with a cookie issued to a different IP, then you invalidate it and have the user login again.
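For illustration, a minimal sketch of that idea in Python, assuming an HMAC-signed cookie; the key handling, cookie format, and helper names are made up for the example, not how Slack actually does it:

    import hashlib, hmac, secrets

    SECRET_KEY = secrets.token_bytes(32)  # server-side signing key

    def issue_cookie(session_id: str, client_ip: str) -> str:
        payload = f"{session_id}|{client_ip}"
        sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{sig}"

    def validate_cookie(cookie: str, request_ip: str) -> bool:
        try:
            session_id, bound_ip, sig = cookie.rsplit("|", 2)
        except ValueError:
            return False
        expected = hmac.new(SECRET_KEY, f"{session_id}|{bound_ip}".encode(),
                            hashlib.sha256).hexdigest()
        # Wrong signature or a different source IP -> force a fresh login.
        return hmac.compare_digest(sig, expected) and bound_ip == request_ip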

This might be problematic on mobile devices, so maybe another option might be to include a device id and a nonce in the signed token and each time the cookie is used to establish a connection, the device is issued a new signed token with which to establish the next connection. If a user logs out on a device or the same token is used twice, then Slack could immediately invalidate all tokens.
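A rough sketch of the rotate-on-use variant, with an in-memory dict standing in for whatever server-side store would actually back it (everything here is illustrative):

    import secrets

    active = {}   # token -> device_id, tokens that are still spendable
    retired = {}  # token -> device_id, tokens that were already used once

    def issue_token(device_id: str) -> str:
        token = secrets.token_urlsafe(32)
        active[token] = device_id
        return token

    def redeem(token: str) -> str | None:
        if token in retired:
            # Reuse detected: a stolen token was replayed. Revoke every token
            # for that device and force a fresh login with MFA.
            device_id = retired[token]
            for t in [t for t, d in active.items() if d == device_id]:
                del active[t]
            return None
        device_id = active.pop(token, None)
        if device_id is None:
            return None  # unknown token
        retired[token] = device_id
        return issue_token(device_id)  # the device's next one-time token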


This is a concern of mine on systems I design/maintain.

How do I mitigate a stolen cookie from successfully authenticating someone else?

Do I store user browser-agent/IP and check that on every request?


I think we are living in a post "IP address" world, to be totally honest.

I can often switch through many IP addresses in an hour (especially when travelling) - various WiFi points, 4G, etc. Services will appear incredibly broken these days if they require a new login per IP address.

Obviously you could force all traffic to be routed through a VPN and allowlist that, but it seems people are moving away from that approach.

To me, the better question is how these cookies get stolen in the first place.


IP based auth is super annoying for legitimate users since it logs them out frequently.


If it's an internal corporate system where all the users sit at assigned machines and have fixed IP addresses, yes you can do stuff like IP address checking.

Otherwise you probably need short-lived cookies that get renewed by the client in the background, with a hard expiry of some reasonable "work day" length such as 8, 12, or 16 hours. Then even if a cookie is stolen, there's a fairly short window of time in which it's useful to anyone.
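Something like this, as a sketch (the durations and field names are just illustrative defaults):

    import time

    RENEW_EVERY = 15 * 60       # each cookie is only valid for 15 minutes
    HARD_EXPIRY = 12 * 60 * 60  # and nothing extends a session past 12 hours

    def new_session() -> dict:
        now = time.time()
        return {"issued_at": now, "absolute_deadline": now + HARD_EXPIRY}

    def renew(session: dict) -> dict | None:
        now = time.time()
        if now > session["absolute_deadline"]:
            return None  # work-day cap reached, require a full re-login
        if now - session["issued_at"] > RENEW_EVERY:
            return None  # cookie went stale, so a stolen copy is already dead
        # Client renewed in time: issue a fresh short-lived cookie, same hard cap.
        return {"issued_at": now, "absolute_deadline": session["absolute_deadline"]}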


As long as your authentication scheme is based on a bearer token, you can't really prevent it, but binding to IP and setting a short expiry can help mitigate it.

If you want to avoid this, you have to use something in your authentication scheme that can't leave the device/user, so we're talking certificate or other public key crypto based schemes.

TLS mutual authentication is one common tactic for this, although the scenario itself is uncommon.
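A minimal server-side sketch with Python's stdlib ssl module (the certificate file names are placeholders):

    import socket, ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")
    context.load_verify_locations(cafile="trusted_clients_ca.pem")
    context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()  # handshake fails unless the client
            print(conn.getpeercert())           # presented a cert our CA signed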


In my opinion you don't. Rely on the authentication provider to handle that responsibility. Services like Duo/Okta perform this risk assessment and may opt to require an MFA challenge.


I've never wanted to completely hand over authentication to a third-party.

Instead, what I think I'd like is just the risk assessment being performed by a third party while I'm still handling authentication (i.e. a third party that has a broader view of what's happening across multiple services over time). I just send the pieces of information that I'm willing to share as an API call, and they make the best risk assessment they can.

Then I can take that risk assessment result and make the final decision on whether authentication succeeds or not.
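Roughly what I have in mind, sketched with a made-up risk API endpoint, payload fields, and score thresholds:

    import json
    from urllib import request

    def assess_risk(user_id: str, ip: str, user_agent: str) -> float:
        body = json.dumps({"user_id": user_id, "ip": ip, "user_agent": user_agent})
        req = request.Request(
            "https://risk-provider.example/v1/score",  # hypothetical provider
            data=body.encode(),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.load(resp)["risk_score"]  # say 0.0 (safe) .. 1.0 (risky)

    def decide_login(user_id: str, ip: str, user_agent: str, password_ok: bool) -> str:
        if not password_ok:
            return "deny"
        score = assess_risk(user_id, ip, user_agent)
        if score > 0.8:
            return "deny"
        if score > 0.4:
            return "step_up_mfa"
        return "allow"  # the final call stays in my own auth code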


There are risk services out there.

https://sift.com/ is one you call out to that gives you a risk score.

https://datadome.co/ can sit within your CDN layer and do risk assessment.


That's not always an option.


You can downvote all you want. Some projects are sensitive enough not to allow third-party authentication (military systems, anyone?).

Besides, if you're large enough it makes business sense to do it yourself anyway.


If the client device has a TPM or some sort of hardware that can manage the secret, you can leverage that. Otherwise, protecting against "attacker has a valid session" is not very easy. Even in the TPM case, attackers with code execution on the device can likely bypass it.
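The general shape is proof of possession: the session is tied to a private key that, in the real thing, would sit in the TPM or secure enclave and be non-exportable. A software Ed25519 key (via the cryptography package) stands in for it in this sketch:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()    # in reality: TPM-resident, non-exportable
    registered_pubkey = device_key.public_key()  # server stores this at enrollment

    def sign_request(body: bytes) -> bytes:
        # A stolen cookie alone is useless; every request also needs a
        # signature that only the enrolled device can produce.
        return device_key.sign(body)

    def server_accepts(body: bytes, signature: bytes) -> bool:
        try:
            registered_pubkey.verify(signature, body)
            return True
        except InvalidSignature:
            return False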



