> E.g. address X sends $100MM worth of bitcoin to two addresses, with ~$1k going to address Y and the rest going right back to address X.
What you are likely looking at is not fraudulent; it's a characteristic of bitcoin's UTXO design that shows up in almost every transaction that doesn't deplete a wallet. If the transaction didn't send the remaining BTC back to itself, that remainder would become the "mining fee". So you see these transactions where the remaining change is sent back to the same wallet.
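To make the change arithmetic concrete, here's a minimal sketch with made-up amounts (the figures and labels below are hypothetical, not pulled from any real transaction):

```python
# Hypothetical numbers only: why a large transfer usually sends most of its
# value straight back to the sender as "change".
input_utxo = 14_000.0   # BTC locked in address X's unspent output (roughly $100MM here)
payment = 0.15          # BTC actually going to address Y (roughly $1k here)
fee = 0.0005            # BTC left unclaimed in the transaction; the miner collects this

# A UTXO is spent in full, so whatever isn't the payment or the fee must be
# returned to an address the sender controls as a "change" output.
change = input_utxo - payment - fee

outputs = {
    "address_Y": payment,          # the real transfer
    "address_X (change)": change,  # looks alarming, but it's just change
}
print(outputs)
```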
Do your engineers store infrastructure secrets (like AWS Access Keys / Secrets) within it?
The instructions indicate that these "Secure Notes" are likely compromised and an adversary has the ability to decrypt them. If your answer was yes, a bad guy has easy access to your environment.
Additionally, if you're feeling extra cautious, you should look for malicious activity within any dashboards or logs provided by the apps you authenticate into with OL. For instance, any sort of "recent logins" feature.
Lastly: it's sort of unclear to me what the exposure might be for any potentially leaked multifactor integrations. For instance, whether a DUO integration + secret key leaked, and whether a credential roll for MFA integrations needs to happen.
- How does your engineering team track new "debt" after releasing code? (if at all, and why not)
- Do you pay anyone for centralized logging, or wish you didn't? Are you making it useful?
- Do you feel like your company is good at managing access when hiring / firing people?
Otherwise thanks for any feedback, I enjoy writing these!
Can only speak about my corner of a very large organisation:
- Technical debt of custom-coded solutions is a known issue across our organisation. The new strategy is to move to market solutions, thereby outsourcing the risk to organisations with (hopefully) better code management than we have. For my corner, we don't have technical debt measured accurately enough for my liking.
- Yes, we pay for and use centralised logging. We've actually been through two solutions, and are now moving to a third due to various factors (cost, integrations, speed, out-of-the-box metrics). Integration into the centralised logging system is part of our Request for Tender marking criteria.
- Relatively good at disabling access after someone leaves. We integrate as much as possible with a central repository. It's just the outliers that tend to outlast someone's time in the organisation. Critical systems are absolutely shut down within 24 hours of a leaver departing (usually immediately if they're a bad leaver).
When you use SaaS products, auditing the code is not a service they offer. You have to rely on certifications from independent certifying organisations, etc.
AlienVault: OK... we probably didn't get its full potential here.
HP ArcSight: Extremely powerful, especially the normalizing of logs across similar systems. Requires a team to manage, though.
Splunk: Our business isn't ready for cloud-based hosting of centralised logs; otherwise, we'd be on this already. From my perspective, it's worth it purely for the reduction in complexity when pulling useful information (not just security).
Thanks for writing this, really insightful! A question: what's your advice on how to store secrets on the server side?
Currently, I mainly use a separate "secrets.yml" file that gets deployed via Ansible and is stored there encrypted using Ansible Vault with a strong password. Is that a reasonable approach? And what's your opinion on storing secrets in environment variables? Some people seem to advise this over storing them in files, but I have seen cases where environment variables can be exposed to the web client as well.
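(For illustration only, a hedged sketch of the file-based side of this, assuming Ansible drops a decrypted secrets.yml somewhere only the service user can read; the path and key names below are hypothetical, not anything from the comment above:)

```python
# Minimal sketch: the service reads its secrets from a file Ansible deployed,
# e.g. mode 0400 and owned by the service user. Path and keys are made up.
import yaml  # PyYAML

with open("/etc/myapp/secrets.yml") as f:
    secrets = yaml.safe_load(f)

db_password = secrets["db_password"]
```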
I don't like the idea of keeping secrets in ENV and would limit it to config, though it's the kind of thing I'd ask other folks about myself to understand any tradeoffs. I see Kubernetes and other things supporting secrets in env variables, so I'm unsure how common it is.
The big win is simply keeping secrets out of source code, out of a general engineer's copy/paste buffer, and out of errors sent to a logging platform with single-factor access. Your likelihood of a short-term incident decreases dramatically, especially if those secrets have well-segmented access (i.e., not a single AWS key with `AdministratorAccess` everywhere).
If your code adopts a convention of reading secrets from the environment, you get a lot of flexibility in how they're actually stored; you can put them in protected files and export the contents of the file before running the service, or you can have a tool that works like "env" that populates from a secret store. Your secret storage system can get more sophisticated without your code having to change.
I wouldn't recommend putting them in /etc/environment or /etc/profile or /home/service/.profile where you'll forget about them, though.
Just as a strategy for passing secrets to code, I like the environment a lot.
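As an illustration of that convention, here's a minimal, hypothetical sketch; the helper and variable names are mine, not anything prescribed above:

```python
# The application only ever reads secrets from its environment, so the way
# secrets get *into* the environment (a protected file whose contents are
# exported before the service starts, or an env-like wrapper backed by a
# secret store) can evolve without the application code changing.
import os

def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name} was not provided to this process")
    return value

db_password = get_secret("DB_PASSWORD")  # hypothetical variable name
```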
The gist seems to be that it's easy to accidentally leak environment variables (which is why I think the top comment is off-base). tptacek, do you think this risk is overblown?
It's good to be aware of the fact that environments are inherited by child processes (as are file descriptors), but I don't think that's a good reason to avoid using the environment.
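A small sketch of that inheritance behaviour, with a made-up variable name:

```python
# Child processes inherit the parent's environment by default, so a secret
# in ENV is visible to anything the service spawns.
import os
import subprocess

os.environ["API_TOKEN"] = "not-a-real-token"  # hypothetical secret

# Inherits the parent's environment: the child can read API_TOKEN.
subprocess.run(["sh", "-c", "echo child sees: $API_TOKEN"])

# Passing an explicit, trimmed environment keeps the secret out of the child.
subprocess.run(["sh", "-c", "echo child sees: $API_TOKEN"],
               env={"PATH": os.environ["PATH"]})
```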
Have you heard of torus.sh keyrings? I don't know how well it works for an organization, but integrating torus into my side projects has been painless.
- For network and security stuff, absolutely: Splunk is the bee's knees. For apps, each team tends to run its own mix (graylog2/ELK/custom). Have pushed for more security-type events from apps into Splunk for correlation, but it just costs too damn much.
- Depends on the region. I find the US / UK do okay, but the more of an emerging/growth market we have employees in, the worse it gets.
You said this: "Rarely do I see a team eliminate all of their debt, but the organizations _that least respect_ their debt never get so far behind that they can no longer be helped in a breach."
Do you mean instead "that _at_ least respect"?
I ask only because the two have different meanings.
Thank you for writing these. These blog posts are my go-to resources when my client companies want to learn more about what they can do to improve their security posture long term. It's a really great series.
I'm curious how typo and bit squatting would come into play here, and if attacks leveraging them could collect private keys at a dangerously high rate before people can patch their clients.
Products like Heroku, or the Stripe CTF, or other things that come to mind that operate over SSH going rogue are a bit scarier. If one were to be compromised, it would be a case where mass amounts of private keys could leak. AWS, GitHub, all cloud VPS providers, etc.
Multifactor is relevant as a defense with a vulnerability like this.
We're building out our security and engineering teams. We are based out of San Francisco, and have remote engineering options. We're a company that cares deeply about our security engineers and how they improve our security every day, and we are looking for more.
We're looking for engineers to build new security features for Coinbase and to secure our customers, employees, products, and infrastructure from all sorts of threats. We're doing a lot of building and looking for builders. Today, we're a Rails+AWS shop, with mobile apps and lots more technology being built on the backend. We're also building a culture and a company, so you should care about that stuff too.
We're looking for software engineers, systems engineers, and security engineers... or whatever combination you might be. You should have no problem thinking like a bad guy and be up to date on building defensively. You shouldn't be afraid of an incident and you shouldn't be afraid of getting your hands dirty on new technology.
We've set up some fun tests (on HackerRank) to make sure everyone has a fair shake at an interview (resumes can only tell us so much anyway). Choose one or more that suits your skill set, have fun, and we hope we can talk soon.
Hi, Ryan here. We've moved over to hackerone.com/coinbase and emailed everyone at the whitehat@ address about the transition. We'll be getting in touch about the details and will get an autoresponder up on whitehat@. We don't view missed reports as a good thing; we'll do better, and we've already made improvements.
Hi - I built Facebook's Bug Bounty program with a few other FB folks. There are a couple of things I want to add to the conversation about how we look at rewards.
(Also, in 2009 it was just myself and a couple of others running our disclosure program. It wasn't even bounties at that point, just "we'll get you a shirt", and you can pretty much just blame me for that.)
1. We don't compete with the bug market, so our rewards will not look like market prices. It's true that "Bad Guys" would pay enormous amounts for a bug. They also pay a premium for the criminal risk being taken, and for the opportunity to exploit it which will theoretically make them a lot of money. However, we're good guys and we don't plan on profiting from bugs.
2. You, the researcher, are safe to post and talk about the vulnerability you found when Facebook is held to the disclosure policy. If your bug is extra awesome, we'll sometimes send a bunch of reader traffic your way from our bug bounty page. This has proven to be worth a lot to researchers. Several of our bounty hunters have started companies, gotten jobs, or become internet famous from this program, and they value this more than any bounty.
3. We are pretty lenient on what qualifies as a bug, which means we have a higher volume of payments to researchers than you might expect. If a researcher showed amazing skill in finding something that didn't actually turn out to be a bug, we'll probably reward them anyway because we want them to keep trying. We are pretty lenient on duplicates as well. If we see that someone truly discovered a bug independently (and also showed significant skill discovering it) then they'll probably get a reward too. The theory here is that we want more responsible disclosures instead of pissed off researchers.
Overall, I don't want to argue with the amount we rewarded here, but to show that we're doing a lot of stuff that benefits a lot of researchers. We were one of the first companies to launch a bounty program, and most of the researchers you have listed would probably say they think we're doing pretty well. Not too many companies have a bug bounty program, and I'm really proud of ours! :)