
> No financial or payment information was accessed or compromised in this attack.

This wouldn't be my first concern. It would be all of the confidential communication that happens within Slack.



>"If you have not been explicitly informed by us in a separate communication that we detected suspicious activity involving your Slack account, we are very confident that there was no unauthorized access to any of your team data (such as messages or files)."

That's from the FAQ on the post. It could be inferred that there was some unauthorized access to certain users' communications?


The post notes that the breached database is the user table, which would not contain chat history. I agree that making this abundantly clear makes sense.


This makes it sound like other data was compromised for some specific users. Since they didn't go into how they know it was limited to only these users, I'm not very confident about this.

> As part of our investigation we detected suspicious activity affecting a very small number of Slack accounts. We have notified the individual users and team owners who we believe were impacted and are sharing details with their security teams. Unless you have been contacted by us directly about a password reset or been advised of suspicious activity in your team’s account, all the information you need is in this blog post.


This is actually an interesting point. A compromised user table could conceivably be used for all sorts of nefarious purposes. If the attackers "having access" to the information in that table includes the ability to modify that table, then it is pretty much open season on Slack. For example, an attacker could replace a target user's password-hash with a hash that the attacker knows the plaintext of. Depending on the implementation of the random salt, the attacker may have to replace the salt as well. Then, the attacker logs in as the user, downloads the desired chat history, logs out, and sets the password hash to the original. Not enough information was really given in the blog post, but by the sounds of it, some teams experienced more targeted attacks.
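To make the hash-swap attack above concrete, here's a minimal sketch. It models the user table as a dict, and PBKDF2 stands in for Slack's bcrypt only because it ships with Python's standard library; all names and the table layout are illustrative, not Slack's actual schema.

```python
import hashlib, os

# PBKDF2 stands in for bcrypt here (it's in the stdlib); the idea is the same.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_login(users: dict, name: str, password: str) -> bool:
    row = users[name]
    return hash_password(password, row["salt"]) == row["hash"]

victim_salt = os.urandom(16)
users = {"victim": {"salt": victim_salt,
                    "hash": hash_password("their-real-password", victim_salt)}}

# An attacker with *write* access swaps in a hash (and salt) whose
# plaintext they know...
original = dict(users["victim"])
evil_salt = os.urandom(16)
users["victim"] = {"salt": evil_salt,
                   "hash": hash_password("attacker-knows-this", evil_salt)}
assert check_login(users, "victim", "attacker-knows-this")  # attacker gets in

# ...then restores the original row to cover their tracks.
users["victim"] = original
assert check_login(users, "victim", "their-real-password")
```

Note that this only works if the attacker can write to the table; read-only access forces them back to offline cracking.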


I would suspect things like "being used from a completely new country" or something similar. Could be those are the accounts with weak passwords that the attacker tried the top 10,000 passwords against.


If you get the user table, you can log in as (some) users. If you can do that, you can see (some) chat history.

edit: you can log in if and when you crack some of the hashes.


Incorrect. You can't log in with a password hash; you need a password.


If you get the user table, you can crack the password hashes offline, at your leisure.
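A sketch of what "offline, at your leisure" means: with the table in hand, the salt stops precomputed rainbow tables, but nothing stops hashing candidate passwords one by one. PBKDF2 again stands in for bcrypt since it's in Python's stdlib; the wordlist and password are made up for illustration.

```python
import hashlib, os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

# One row from the stolen user table (salt + hash of a weak password).
salt = os.urandom(16)
stolen_hash = hash_password("slack123", salt)

# The attacker hashes each candidate with the stolen salt and compares.
wordlist = ["password", "123456", "letmein", "slack123", "qwerty"]
cracked = next((w for w in wordlist
                if hash_password(w, salt) == stolen_hash), None)
print(cracked)  # slack123
```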


While technically true, this seems like it would be computationally infeasible, or at least impractical, given that they were not just hashing but also salting the passwords.

Of course, I barely know anything about computer security, but at least it should prevent attacks using rainbow tables I think?


No, a simple password like "slack123" should be easy to crack with any usable password storage method.


Not necessarily easy. If they're using a decently high cost for their use of bcrypt, we're talking hours to days (or more) per user, even when only considering weak passwords like that.
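A back-of-the-envelope version of that estimate. The 0.3 s per bcrypt hash at cost 12 on a single CPU core is an illustrative ballpark, not a benchmark; the point is that cost scales linearly with the wordlist size, and doubling the bcrypt cost parameter doubles every figure.

```python
# Time to try a wordlist against ONE user's salted bcrypt hash,
# assuming ~0.3 s per hash (ballpark for cost 12 on one CPU core).
seconds_per_hash = 0.3
for candidates in (10_000, 1_000_000):
    hours = candidates * seconds_per_hash / 3600
    print(f"{candidates:,} guesses: {hours:.1f} hours per user")
```

So a top-10,000 list is under an hour per user, but a serious wordlist runs into days, and that per-user cost is exactly what makes cracking an entire user table impractical.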


True, I guess it's possible to crack a password for a single user, especially one with a weak password. I was thinking more that it's unlikely they'll be able to crack the passwords of everyone in their database, and given that Slack has so many users, it's unlikely that any given person's password will be cracked.

Of course, even if they can't steal everyone's passwords, maybe the hackers will try to crack the passwords of higher profile targets.


GPUs are fast enough to crack a very large percentage of passwords in a short time by brute force, if a simple algorithm was used, even with salt.


With a separate salt for each password not even the NSA can crack that (that we know of). With a single salt for all of them, maybe.


Sure they can. Anyone can. It just takes a long time per password to crack (that time is a function of the cost/# of rounds of the hashing function).


No kidding. That's why I put (some) users: brute-forcing the hashes will give you some password plaintexts.

I guess that I missed a step in the explanation where you attack the hashes.

However I see that they say that they are using some best practices (bcrypt, "salt per-password") so this attack will be largely mitigated.
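For reference, bcrypt's "salt per-password" isn't even a separate column: each stored string carries its own cost parameter and salt, which is why identical passwords still produce different hashes. The value below is a dummy string with the right shape (22-character salt, 31-character digest in bcrypt's radix-64 alphabet), not a real hash.

```python
# Anatomy of a bcrypt modular-crypt string: $<ident>$<cost>$<salt><digest>.
# This value is a shape-correct dummy, not the hash of any real password.
stored = "$2b$12$" + "N9qo8uLOickgx2ZMRZoMye" + "IjZAgcfl7p92ldGxad68LJZdL17lhWy"
_, ident, cost, rest = stored.split("$")
salt, digest = rest[:22], rest[22:]
print(ident, cost, len(salt), len(digest))  # 2b 12 22 31
```

Because the salt travels with the hash, per-password salting comes for free with bcrypt, defeating rainbow tables while leaving per-user brute force as the only remaining avenue.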


Depends on the nuances of the system. If you can pass-the-hash, you can get in.


Agreed. The content of the chats would potentially be much more important, in my mind.


Which leads to the question of whether Slack encrypts the chat data in the database.


That would make implementing search quite hard, so I'd say it's pretty likely they don't encrypt it.


If anyone from Slack is reading this, the encryption should be an option, even if it means disabling or substantially slowing the search feature.


If they encrypted it, Slack would have to hold the key, so that all users in an org can then read existing messages.


No, it could be a private key shared among users.


That's not right. There is no need to store the text body in order to index it. Furthermore, you can implement an index of token hashes, rather than an index of tokens.


It would remove a lot of nice search features, however. If you just index tokens without positional information, you have a much harder time performing phrase matching. If you include positional information, you can probably crack the encryption because some tokens are statistically more likely to appear next to each other than others.

If you index shingles (phrase chunks) instead, you lose out on sloppy phrases...you can only match exact phrases. I imagine you can perform a similar statistical attack too.

Hell, just getting the term dictionary would probably allow you to reverse engineer the tokens, since written language follows a very predictable power law.

Hashing also removes the ability to highlight search results, which significantly degrades search functionality for an end user.

Basically, yes, you can do search with encrypted tokens...but it will be a very poor search experience.


If they don't encrypt storage, they are highly negligent. Index and search are done in RAM, which is slightly harder to steal from than disk data.


This reminds me of the plot of Silicon Valley


Is there a good reason to keep chat data longer than it takes to deliver it to the recipient?


They archive chat messages so that you can search through them later.


That alone would be a great reason not to use them.


It's also a great reason to use them, isn't it? Your searchable chat history basically becomes the knowledge base of your company.


And a great target for discovery in any sort of lawsuit.


As is email.


To me, that is something that you should keep internal, on internal systems with vetted free software.


https://slack.zendesk.com/hc/en-us/articles/203457187-Settin...

It's configurable for paid accounts, and can be set as low as one day. However, one of the best features of slack (and products like slack) is message history and search. Otherwise, IRC isn't all that different (WRT messaging).


It's why I love Slack. If I remember a conversation about something two months ago I go to the room, search and find exactly what I needed.


Maybe it's time for Slack to adopt the Axolotl ratchet, too.


I'd love for them to do that, but there's a couple of problems that they'd have to overcome first.

First: Slackbot. This is a Slack-run bot that's in every channel; team owners can customize it to do various things, like scan messages for keywords and give out canned responses. Even if Slack adopted some variant of encrypted chat, each message would still need to be readable by Slackbot, so Slack would still have the means to collect every message.

Second: channel history. When I join a channel, I can see the messages in that channel from before I joined. This means that Slack (the server) must be able to give me those historical messages. In an encrypted group chat, the messages are encrypted only with the keys of the participants at that time, which means newcomers can't read them.

I'm sure there are other features in conflict with end-to-end encryption, too; these are just off the top of my head.


The first could be solved by having the activation part of the bot run on the clients themselves, sending only the matched messages in readable form to the server.

As for the second, the server could ask one of the clients to re-encrypt the channel history with the newcomer's key. It would only fail if nobody was online the moment you joined the channel (and you still could get it later).
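The re-encryption hand-off could look something like this. The XOR keystream "cipher" below is a deliberately toy stand-in for a real AEAD scheme (do not use it for anything real); the flow it illustrates is an online client decrypting history with the old channel key and re-encrypting it under a key the newcomer holds.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """SHA-256 counter keystream -- a TOY, not a secure cipher."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

toy_decrypt = toy_encrypt  # XOR stream: the same operation both ways

old_key, newcomer_key = b"old channel key", b"newcomer's key"
history = [toy_encrypt(old_key, b"hello"), toy_encrypt(old_key, b"world")]

# An existing client performs the hand-off for the newcomer:
for_newcomer = [toy_encrypt(newcomer_key, toy_decrypt(old_key, c))
                for c in history]
print([toy_decrypt(newcomer_key, c) for c in for_newcomer])
```

The server only ever relays ciphertext in this flow, which is the property end-to-end encryption is after; the catch, as noted, is that it needs at least one existing member online.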


My concern is the usernames, emails, and phone numbers that were probably not encrypted.


Ultimately, passwords can be changed; internal chat messages regarding personal and confidential data cannot be taken back.


User metadata can be used for social engineering, and people are typically the weakest link.


Exactly!!!

Encrypting user data should be a common practice like hashing passwords.


> Exactly!!!

> Encrypting user data should be a common practice like hashing passwords.

I get the feeling that you've never done this before and don't understand the technical challenge and the implications of the added complexity you propose here for an essentially free-to-low-price, all-in-one online communication service.

Slack is not the NSA, encryption is not the answer to every security problem out there.


Third-party authentication should be the norm: leave authentication to providers that absolutely know their shit, just like we leave payments to third-party services.

Of course, that requires a decent protocol, and Mozilla is doing the world a disservice in not marketing Persona better seeing as it's the right solution....


Major privacy issues, single point of failure etc etc. We leave payments to third party services because nobody wants to deal with the compliance nightmare that PCI-DSS is, not for security reasons. Payment is also mostly less sensitive to availability and latency issues than authentication.


So in a world where PCI-DSS isn't a thing, you're fine entering your credit card data directly on the forms available on random websites?

Why's a password so different, seeing as most people reuse those passwords? Why do we essentially allow (and yes, I am excluding those that use password managers in this statement, I'm one of those) access to our webmail and other critical services to random websites on the internet? What makes this right?

> Payment is also mostly less sensitive to availability and latency issues than authentication.

That's patently untrue. Latency issues are nonexistent in both areas, and availability issues are critical in both areas.


Yes, I have no problem entering my credit card data directly on the forms available on random websites.

Credit card payments online are so ludicrously insecure that it baffles me it's even legal. I only use them when dealing with the US (although some of the major retailers like Apple have finally started accepting 21st century payment methods), and I simply assume my credit card info has been leaking all over the place for ages.

The whole basic premise of credit cards is "we know it's totally broken, we'll just refund you the money because it's cheaper than fixing the problem".


> So in a world where PCI-DSS isn't a thing, you're fine entering your credit card data directly on the forms available on random websites?

Yes. It might be a hassle should someone misuse it, but the status-quo effectively means if I didn't make the purchase I'm not responsible for it.

More importantly, this was proven before PCI-DSS was a thing.


You mean like how Authy specialised in two-factor authentication, but still managed to have basic string concatenation bugs that rendered their entire 2FA system bypassable?


Huh? This is the first I've heard about this, and searching for "Authy concatenation bug" isn't turning up anything useful.


Here's the write-up from Homokov. The guy is a pen-testing genius: http://sakurity.com/blog/2015/03/15/authy_bypass.html

But if you just want the money shot: http://sakurity.com/img/smsauthy.png

Yes. Typing '../sms' in the field bypassed the 2nd factor. Just, wow.


Huh. Well now I know. Thanks!

Amazing what you can do with improperly-implemented input sanitization :)

This probably could've been prevented by disallowing non-number inputs, no?


"In fact the root of the problem was default Sinatra dependency 'rack-protection'".

They were doing the input sanitization, but it wasn't the very first thing in the processing pipeline, since "best practice" was to pipe everything through 'rack-protection' first.

As Homokov was the first to state, this was really a black-swan type of bug, the kind that 99.9% of the time makes it into production. Apparently, they were doing the "right thing" and still got burned.


The parent meant "this probably could've been prevented by disallowing non-number inputs" in the SDK libraries. Yes, if the SDK cast everything to digits, it wouldn't be possible. It's also quite obvious defense-in-depth for a 2FA API. Now they do it.

*HomAkov


Or even just input validation on the form itself before passing on to the API, which is more of what I was getting at. I don't know about the details of Authy's setup, but I know that AJAX (for example) supports enforcement of specific value types in text fields.

Basically, the form itself could have (and maybe even should have) required numeric-only values, seeing as Authy's codes are either 6 or 7 digits long and contain no alphabetical or special characters.
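A server-side check matching that suggestion might be as small as this; the function name is made up, and the 6-or-7-digit rule is taken from the comment above.

```python
import re

# Authy-style codes are 6 or 7 digits; reject anything else before it
# ever reaches the verification API.
CODE_RE = re.compile(r"^\d{6,7}$")

def valid_code(value: str) -> bool:
    return bool(CODE_RE.fullmatch(value))

print(valid_code("1234567"))   # True
print(valid_code("../sms"))    # False
```

Client-side validation on the form is a nice touch for usability, but it's this server-side check that actually closes the '../sms' hole, since attackers talk to the API directly.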


:-( Sorry, typo. And HN won't let me edit now, grrr!



Hey, that causes some immediate stir in my mind as a user of Authy. Could you share any reference to the incident you mentioned?



... no? no I don't mean like Authy.


You never decrypt a password, however. You only compare the hashed version of the claimed one to the stored hashed version, a one-way operation.

What could you do with a one-way encrypted phone number? I'm not able to enter a phone hash to make a call.


Encryption isn't the same as hashing. Encryption is two-way.

The previous comment did make the encryption / hash distinction - though I can totally understand how his post might have been misread that he was recommending the same mechanisms for both sets of data.


OK, so Slack stores a username, name, and email address for each user. This is visible to everyone else in the same Slack team at minimum. You also need it for e.g. password resets, and perhaps billing.

We can assume they aren't total idiots and there's an Internet-facing application server that connects to an internal-only database server that has this data. Also, assume SQL injection is not the attack vector.

How would you apply encryption to protect the username, name, and email from an attacker that has gained access to the application server? I've gained a shell on the server and have 24 hours to extract data. I can see all the files on the server, though perhaps not as root, only as the user that runs the application. How can you, as a security-sensitive application developer, stop me if I've gotten that far?


I wouldn't. I don't agree with his point either (see my response to him: https://news.ycombinator.com/item?id=9277659).


Why? Encrypting e-mail addresses would break password reset features, and phone numbers are generally public anyway (yes, you can go ex-directory, but the real issue here is why these services require a valid phone number to begin with).


Why would encrypting email addresses break password reset? You can encrypt the database at rest such that the application has a private key that can decode it. That way both the application and the database server need to be breached to obtain anything usable.


It's often a bug in the application that exposes the database, so the same bugs might also be used to expose the private key.

It's also worth noting that it wouldn't just be the web servers that require your private key; it would also be any mail servers you use for sending your newsletters and such (assuming these aren't run on your web servers, which often isn't the case). Then there's your telephone support staff, who may also need to know your e-mail address so they can do their job effectively. And any other operators that might compile data extracts, e.g. for 3rd parties where users have given permission for your details to be used / sold.

Quickly you're in a situation where your private key is more available across your infrastructure than the e-mail would have been if it wasn't encrypted to begin with.

Now let's look at the cost of such a system. There's an obvious electricity / hardware cost in the CPU time required to encrypt / decrypt this data (after all, CPU time is the general measure of the strength of encryption), and a staffing cost in the time wasted jumping through those extra hoops. The development time, code complexity, etc. - it all has a cost to the company.

So what's the benefit to any company doing this? They don't gain any extra security. This is really more of a privacy policy for their users; and users who are that paranoid about their e-mail address being leaked should either use a disposable e-mail account or shouldn't be using a cloud-based proprietary messaging network to begin with. What's more, the chat history might well have your e-mail address in it anyway (e.g. "hi dave, I'm heading into a meeting shortly, but e-mail me at bob@example.com and I'll have a look tonight").

Don't get me wrong, I'm all for hashing / encrypting sensitive data. But pragmatically we need to consider:

1) are e-mail addresses really that sensitive? Or instead should we be encouraging better security for our web-mail et al accounts (eg 2 factor authentication) to prevent our addresses being abused. Given that we give out e-mail addresses to anyone who needs to contact us, I think the latter option (securing our email accounts) is the smarter one

2) instead of encrypting phone numbers and postal addresses, should we instead be challenging the requirement for online services to store them to begin with? If they have my email address, why do they also need my phone number? Postal address I can forgive a little more if there's a product that needs shipping or payments that need to be made.


Or just the application. Generally, it's much easier to convince apps to give you the data instead.


It's still worth mentioning, even if it's not your "first concern".



