> In an interesting turn of events, the investigation of the whole SolarWinds compromise led to the discovery of an additional malware that also affects the SolarWinds Orion product but has been determined to be likely unrelated to this compromise and used by a different threat actor.
Is anyone ever pissed that one exploit getting caught reveals other hackers' efforts?
Like how the amateurish but high-profile WannaCry attack revealed a much more lucrative Monero-mining botnet that had been quietly running on the same exploit for weeks before some script kiddie ruined it.
> it was exploiting a weakness and preventing others from doing the same.
This is also a solution (albeit with a race condition) to one bug that the Morris worm had: the same worm infecting a host multiple times and the multiple instances drawing more attention / stepping on its own feet. Other worms likely have had similar flaws.
(Any suggestions to eloquently un-mix the worm/feet metaphors?)
This makes me think of a common refrain when dealing with parasite infestations: If you see one, there's way more than just one.
Deterministic builds cannot come soon enough. And really, builds are not enough, we need to be able to extend confidence in the execution of the programs we write much deeper than just builds.
This doesn't do anything for people who buy SolarWinds Orion, which is a closed-source off-the-shelf tool that gets picked up everywhere because of a combination of good sales tactics, compliance checkboxes, and ability to remove work from all involved.
Going back up the chain, a technical solution probably won't solve the issues inside SolarWinds either. Systemic organizational issues lead to RCE backdoors and implants distributed on official update servers, signed with authentic keys.
Deterministic builds can be done with closed source too. It doesn't directly help the users, but if they had set up a second build machine and noticed the build output was different, they could have addressed this sooner.
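Once builds really are deterministic, the comparison step itself is almost trivial. A minimal sketch in Python, assuming you already have the output trees from two independently maintained build machines (paths are placeholders):

    # Minimal sketch: compare build artifacts from two independent build machines.
    # With deterministic builds, any digest mismatch is worth investigating.
    import hashlib
    import sys
    from pathlib import Path

    def digest_tree(root: Path) -> dict:
        """Map each file's path (relative to root) to its SHA-256 digest."""
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()
        }

    if __name__ == "__main__":
        a, b = digest_tree(Path(sys.argv[1])), digest_tree(Path(sys.argv[2]))
        mismatches = [p for p in sorted(set(a) | set(b)) if a.get(p) != b.get(p)]
        if mismatches:
            print("MISMATCH in:", *mismatches, sep="\n  ")
            sys.exit(1)
        print("Build outputs are identical.")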
Of course, if following best practices, all build machines should be equally compromised. ;p
How is this possibly acceptable? We've given people verifiable proof that this binary is not the one we created, yet users should crack on and install it anyway?
I wonder if you could gain security while preserving agility by having build servers with exceptional (and annoying) security maintained offline. Do your CI/CD work, then chop off a weekly release and build it from source on a machine that’s been powered off in a secure room the whole time.
Still doesn’t help you if the attack is sufficiently upstream..
I am curious how this code actually made it in, based upon the following:
> The fact that the compromised file is digitally signed suggests the attackers were able to access the company’s software development or distribution pipeline. Evidence suggests that as early as October 2019, these attackers have been testing their ability to insert code by adding empty classes.
Unless this is a compromise of the build machine, it sounds suspiciously like a lack of code review standards to me.
In our organization, the only way to get a line of code into master is through a process where a 2nd developer reviews and approves via GitHub. Branch protection rules are really nice for this kind of concern. Obviously, the attacker can hit right after cloning source, but it helps to know your foundations are clean regardless.
All you need is to be able to influence the behavior of the build system at runtime. If you have that, you do not need access to the source code, and you do not need to check anything in. This includes scenarios where the tool chain itself is checked into the source control system.
At runtime, you alter the build system to notice if a new compiler binary is being built. If so, inject code that, from then on, invisibly "injects the injector". Once you've shifted to this new compiler, the compiler binary itself is the attacker.
Since you have an injector, you can put other types of detections and alterations in...like watching for the compilation of the initialization of a Solarwinds DLL. Then you inject in what you need. No source code is involved.
There are probably lots of ways of getting around this, but unless you're actually looking for it, you won't see it.
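One way to actually look for it: pin hashes of the toolchain binaries somewhere off the build host and verify them before every build, so a silently swapped compiler stands out. A rough Python sketch (the manifest file and tool paths are made up for illustration):

    # Rough sketch: verify build-toolchain binaries against pinned SHA-256 digests
    # before each build. The manifest (e.g. {"/usr/bin/csc": "ab12..."}) should be
    # maintained and checked from somewhere the build host cannot rewrite.
    import hashlib
    import json
    import sys
    from pathlib import Path

    def verify_toolchain(manifest: Path) -> bool:
        pinned = json.loads(manifest.read_text())
        ok = True
        for tool, expected in pinned.items():
            actual = hashlib.sha256(Path(tool).read_bytes()).hexdigest()
            if actual != expected:
                print(f"possible tampering: {tool} hash {actual} != pinned {expected}")
                ok = False
        return ok

    if __name__ == "__main__":
        sys.exit(0 if verify_toolchain(Path("toolchain-hashes.json")) else 1)

Of course an attacker who fully owns the build host can patch this check as well, so in practice it wants to run from a separate, trusted machine.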
This is really interesting - really seems to be an argument in favor of all code being open, shared source - along with deterministic builds, it would be easy enough for any organization to build Orion themselves and verify they get the correct build hash. Or if there was a disagreement in hashes between SolarWinds and their imagined community, it would serve as a red flag.
This code was added to the SolarWinds source codebase, if I read the article correctly. No check in the world is going to catch it if it's actually in the base code.
Carefully compiling a corrupted program doesn't fix it.
If you view GitHub as a static infallible source, then yes -- your analysis is correct. But there are gaps anywhere. If you want perfect 1:1 mappings between source, the developers who make it, and the end builds, you essentially need a "chain of trust" that can be tested at every stage. For example: are all developers pushing code with encrypted SSH keys? Are the commits signed? Are the signing keys hardware backed? Does CI check the signatures? Is the CI server up to date? Are all packages on the CI server signed and trusted? Are all stages of the build pipeline testable for tampering and tamper-proof? You're not curling or apt-getting or running npm anywhere in your build server for some kind of Slack integration, right? The list goes on.
The issue is that most developers view "the code pipeline" as a trusted and complete system, for the most part. In the vast majority of cases, that's okay. The issue is that SolarWinds should have known, based on their very own customer list, that they were in an advantageous position in many organizations that are valuable targets. That should have _caused_ all of this thinking to happen, and led to changes internally to accommodate the new risk. That threat modeling/analysis either didn't happen, or the outputs weren't good enough.
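On the "does CI check the signatures?" question above, the check itself is small. A minimal sketch, assuming the CI host has the trusted public keys imported and that master is the protected branch:

    # Minimal sketch: fail CI if any commit between origin/master and HEAD lacks
    # a good GPG signature. Assumes trusted public keys are already imported.
    import subprocess
    import sys

    def unsigned_commits(rev_range: str = "origin/master..HEAD") -> list:
        # %G? prints "G" for a good (valid) signature; anything else is suspect.
        out = subprocess.run(
            ["git", "log", "--pretty=format:%H %G?", rev_range],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.split()[0] for line in out.splitlines() if not line.endswith(" G")]

    if __name__ == "__main__":
        bad = unsigned_commits()
        if bad:
            print("Commits without a good signature:", *bad, sep="\n  ")
            sys.exit(1)
        print("All commits in range are signed.")

It only pushes the problem one layer down (key distribution, the CI host itself), but it closes the "anyone can push anything" gap.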
They are claiming that their build system was compromised and the code was not under source control.
> Based on our investigations to date, which are ongoing, we believe that the vulnerability was inserted within the Orion Platform products and existed in updates released between March and June 2020 (what we call the “relevant period”) as a result of a compromise of the Orion software build system and was not present in the source code repository of the Orion Platform products.
In my experience and where I work, the build system tends to be the most neglected part of the pipeline, the most trouble-prone, and frequently the source of headaches nobody wants to bother with. I think the days of build being the red-headed stepchild nobody wants to deal with are coming to an abrupt end.
Spoken like a true dev. There are other ways of checking code in besides the official channel. Almost every company on the planet could fall victim to this type of attack. Once a team gets past a certain size and "it's not my job" comes into play, all kinds of doors swing open.
It’s too obvious even for a huge app. The empty catch alone is something I’d immediately “git blame” if I saw it. I work on a 20 year old massive enterprise app and there is lots of “not my job”, but someone would see it.
Also, it would likely (or hopefully) trigger a static analysis warning in the build as soon as it’s added. For such a sophisticated attack this would be too much of a weak point. It would be much better to have access to a point in the build system that enabled you to inject that code in or after the compilation, e.g by tampering with the tool chain on the build machines.
The issue here is that the code was added at build time. Do you do code-reviews after decompiling build output? If you’re a sane person, probably not - so you’d fall for this too.
An automated tool might have more of a chance, but again it’s kinda hard to have one that runs on binaries. If it runs after build, it’s typically some input/output checker, which would not detect code like this.
It’s a hard problem and I think it has just demonstrated that the security of build-related infrastructure should be taken more seriously than it currently is.
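One cheap automated check that does run on binaries: extract printable strings from the new build and diff them against the previous release, flagging anything new that looks like a URL, hostname, or long Base64 blob. A rough sketch (the thresholds and patterns are arbitrary, and a determined attacker can obviously encode around it):

    # Rough sketch: diff printable strings between the previous and the new binary
    # and flag newly appearing, suspicious-looking ones. Not robust, just cheap.
    import re
    import sys
    from pathlib import Path

    STRING_RE = re.compile(rb"[ -~]{8,}")  # printable ASCII runs of 8+ chars
    SUSPECT_RE = re.compile(r"https?://|\.(com|net|org)\b|^[A-Za-z0-9+/]{40,}={0,2}$")

    def strings(path: Path) -> set:
        return {m.group().decode("ascii", "replace") for m in STRING_RE.finditer(path.read_bytes())}

    if __name__ == "__main__":
        old_bin, new_bin = Path(sys.argv[1]), Path(sys.argv[2])
        for s in sorted(strings(new_bin) - strings(old_bin)):
            if SUSPECT_RE.search(s):
                print("new suspicious string:", s)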
Yes, exactly. It has to be added to the binary, or after static analysis of the source. Adding it to the source would too easily risk discovery.
Even if I did have source write access, rather than adding the poison to the runnable code, I’d add the poison to code run at build time (a unit test) which modified the build tool chain and then removed all traces of the poison code again.
I was thinking something similar. My first comment on review, or even just reading the code, would be to ask why the exception is swallowed without any logging, or without a comment explaining why there is no logging.
That of course leads one to ask what this code is actually doing.
If we're at the level where we think it's an inside job, it doesn't seem that difficult to have 2 people on the inside "reviewing" each other's malicious commits.
For what it's worth, my org also has the same policy, but it's intended to catch mistakes, not to protect against malicious actors inside the company.
The vast majority of software shops don’t even consider insider threat in any meaningful way.
Imo it would be trivial to compromise many. Most companies have soft-underbelly units like offshore maintenance engineering, tools teams and patching teams who don't get a lot of meaningful oversight and can bypass many controls.
I mean, not even that. The cost to buy a software engineer and get them hired at the place you want to attack is really not that high. Once inside it's generally possible to get things in ("the build server was failing so I SSHed in and fixed it").
Or have them compromised? In the spy movies they'd send a female agent to seduce and then blackmail the married high ranking official, a scenario that bachelor software developers probably dream of.
Do you code review the output of your build system? Imagine you had to write Java for a minute. Are you going to open up JAR files and look at strings to ensure no one is inserting code from your build server?
Or if Java is a bridge too far, are you inspecting your minified webpack output to ensure no one is inserting malicious Javascript?
> Unless this is a compromise of the build machine ...
Assume compromise of the build machine. Start with, who builds the build machine, and how do they maintain it? Can humans get into it at all, such as in a break glass scenario?
Also, GitHub, GitLab, and the like, may not actually guarantee what you are relying on to enforce “the only way”.
Solarwinds accidentally leaked (via Github) the FTP credentials to the infrastructure used to distribute builds in late 2019 [1].
I'd be curious to see if the digitally signed bad versions are similar to digitally signed good versions, i.e. if there's any chance the attacker found/developed a hash collision against an otherwise legitimate build. AIUI it'd be a pretty big deal since it would point to a vulnerability in SHA-256 (which is usually how Windows binaries are signed), but this is apparently a nation state we're dealing with? ¯\_(ツ)_/¯
They did sign it with the key they found there; antivirus vendors detected Fancy Bear or some such, but customer support was in denial and recommended all customers ignore the warning, disable scanning of that binary, and whitelist it.
You don't need the GRU with such a company. Microsoft Defender would not help.
Even a 12 year old in mom's basement could have broken into the nuclear arsenal this way. The nation-state allegation came from the stealthy C2 stuff they found. But apparently someone else also took the invitation via the writable FTP.
Branch protection settings are also editable by someone. What's to say the attacker couldn't disable it or bypass it for this one commit? Also I don't think it's uncommon to have a way to get a hotfix deployed without going through the normal checks and balances. For those "the service is down" calls at 2 AM.
I've said this before, but I work on a team of 5 or 6 people. If I (pre covid) sent them a PR and walked over to their desk, told them it was super urgent and a tiny change just needed a rubber stamp, one of them would do it (and I would likely do the same for them). Failing that I can name a handful of developers that wouldn't be familiar with the system but will review my change because I did the same for them a few months ago (and they'll comment on stylistic/clarity issues, rather than the work being done). Even if you think this is rare, it likely isn't and likely happens at every company to some degree.
I've worked at a mediacorp, in the user-authentication team, where one rogue junior developer from another team (the one with the most seniority at that company subdivision, because everybody else had left) went behind my back and pressured my junior colleague into merging a PR in our codebase that opened a security hole in the back end, because he was working with a project lead who had promised to deliver something that we couldn't.
Exactly. Even if it didn't have a security issue, one developer going to a trusted dev on another team and saying "hey, it's the week before Christmas/9pm on a Friday night, nobody else is here, I really need this merge, it's low risk and can go out in the next rollout but QA need to check it this weekend/I'm off next week/<insert some business reason applicable here>" will often result in a thumbs up.
Code review and protected master are certainly important but not infallible. If I were a malware author with code running on an owned dev machine and my goal was to sneak code into a repo, I can think of a bunch of strategies that might increase the odds of slipping past a review.
Just running in the background, waiting to amend a big commit with many changed files/lines would probably go a long way. How often does a reviewer glaze over when reading through a diff where someone shuffled some modules around, causing a lot of line changes without any real implementation changes? Perfect opportunity to slip a few new lines into a long file amidst all the other changes.
As many are saying here: why commit the malware at all? Just modify Jenkins to detect that it's compiling this DLL and add those lines to the source right before it's compiled.
I offer up a glass of Kool-Aid if you believe this works. Employee A is just going to Slack DM Employee B asking them to approve their 7,000-line PR because they're going on vacation next week and Employee C has been slow reviewing.
Huh? After the build server checks out the code, write malicious code to the source files directly just before the compilation step. No merge conflicts or breaking changes unless the added code failed to compile. It's reasonable to guess they had access to the source from the build server, so they could reproduce this environment themselves and test on their own.
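A cheap countermeasure to exactly that scenario is to assert, immediately before invoking the compiler, that the working tree still matches the commit the build claims to be building. A minimal sketch, assuming the build runs from a plain git checkout:

    # Minimal sketch: right before compilation, check that the working tree is
    # still exactly the checked-out commit, so sources rewritten on the build
    # server after checkout show up as a dirty tree.
    import subprocess
    import sys

    def tree_is_clean() -> bool:
        # --porcelain prints one line per modified or untracked file; empty = clean.
        status = subprocess.run(
            ["git", "status", "--porcelain"], capture_output=True, text=True, check=True
        ).stdout
        return status.strip() == ""

    if __name__ == "__main__":
        if not tree_is_clean():
            print("Working tree differs from the checked-out commit; aborting build.")
            sys.exit(1)
        print("Tree clean; proceeding to compile.")

It only raises the bar, of course; whoever controls the box can strip the check out again.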
It's not hard to envision code reviews that don't review every single line. Been on both sides of it. Code review isn't a security barrier, it's a (noisy) safety check. It can't even catch every silly bug, let alone deliberate covert sabotage.
Maybe, assuming it's all committed in one shot (rather than, say, a couple lines per review) and the reviewer scrolls through and glances at every chunk of code. It's some 4000 lines of code, so I wouldn't be surprised if at least one of those wasn't the case. (And all of this is assuming the reviewer was a typical one... if they were particularly known to be careless or compromised, then you don't need these either.)
P.S. Oh, one more thing: some workflows don't mandate a second review after the code is amended, so the initial code could be benign. And many don't require the merger to be the reviewer either. So this could've gotten in that way too.
Depending on what you rebase to and what the usual workflow is for developers. Those that rebase from remote frequently (guilty) won’t notice, but those that do merges might.
>In an interesting turn of events, the investigation of the whole SolarWinds compromise led to the discovery of an additional malware that also affects the SolarWinds Orion product but has been determined to be likely unrelated to this compromise and used by a different threat actor.
Either that one was used to compromise the supply chain (in which case it makes little to no sense to keep it around and risk detection), or at least 2 different groups had the chance to target sensitive US infrastructure.
Funny how media coverage of this issue misses no chance of mentioning Russia and nobody else, not even possible suspects.
I wonder what happens if the attackers notice each other on the compromised system. Do they get along in exfiltrating data or do they fight quietly?
> Funny how media coverage of this issue misses no chance of mentioning Russia and nobody else, not even possible suspects.
There are parts of the intelligence community that know with confidence who the true attacker is. Even if they had no idea they were being exploited, there are many ways to perform post-mortem analysis when you're, e.g., the NSA. So, someone has 100% confidence, or close to it.
In terms of what the media says: typically, they report on off-the-record remarks from officials and leaks. That's just how the game is played. It's an unfortunate byproduct of everyone wanting to tell, but nobody wanting to be caught telling. The value of Reuters and AP is that they typically do enough due diligence on their own sources to make sure that they're not just spouting nonsense. "Top of the food chain" sources like them are very regularly correct, but fallible.
The secretary of state has said as much, and pointed at Russia. Sure, he could be lying, but given the president's reflexive defense of Russia, that would be a weird lie to go with. If anything, it's an admission against interest, which strongly suggests to me that this is the assessment of the relevant security agencies.
Don't forget the "intelligence" community is paid to find Russian spooks hiding everywhere. The 2014 JP Morgan hack was blamed on Russian state backed hackers[1]. We know now that was pure speculation and not NSA inside knowledge, since some time later a small criminal gang were successfully prosecuted for it. Apparently they were running a pump-n-dump scheme.
1. https://eu.usatoday.com/story/tech/2014/08/28/russia-jpmorga...
> In terms of what the media says: typically, they report on off-the-record remarks from officials and leaks. That's just how the game is played.
This isn't how the game is supposed to be played and is a symptom of the erosion of the media's journalistic integrity. Anonymous sources can tell you where the bodies are buried, but you still need to dig up the bodies. One would think if you're going through all the trouble to track down three different sources who are both competent and trustworthy to comment on who the government suspects, that you'd take the opportunity to ask a follow up question like "why do you think it was them?" Yeah, everyone wants to be the first to break a story, and real investigation is a lot harder than tabloid journalism, but that's the job, or at least that's what it used to be.
And herein lies the problem: anyone who actually knows who it is, is not going to tell you how they know. The intelligence used to discover who the attacker is, is much more valuable than the attacker's identity itself. The best you'd probably get is 'classified sources/methods/intelligence'.
But media can just add ", person X says" at the end of a sentence and then the burden of proof is no longer with them. They can report that "Obama is born in Kenya, President Trump claims" and, hey, they're reporting the true fact that Trump claimed something...
I would very much like to see prevention advice tacked on to analyses like these. It's very interesting to see how the vulnerabilities were exploited, but I think it would be extremely valuable to understand how to prevent future attacks such as this. What were the root causes of the vulnerability, and how can the community prevent similar ones from being created in the future?
I like comparing this type of unification with margin trading, in this case with 18000:1 leverage: if stocks go up, you get many multiples of the usual profits, but it takes only a tiny dip in the price to wipe out the account balance.
It can affect non-snake oil from good vendors who provide useful solutions for meaningful compliance tests, too, though. Or just any popular B2B software provider.
If one of the big 3 superpowers really wants to backdoor your product, then even a top-notch company might fall victim. Hopefully this increased awareness will make it harder to pull off these subtle compromises without it getting caught sooner, though.
I think "banned" is an overstatement. There were/are people obligated to use Windows because they need professional EDA tools and suchlike software only available for that platform. And although it's by no means my speciality -- I'm a distributed systems developer, not a Windows sysop -- I was told a few years ago that the general belief was that Windows had the most sound security story of all the operating systems you could put on a laptop. The reputation of Windows is a combination of history and the fact that idiots are likely to use it, so it always looks like the OS people do stupid things with. But if your company culture isn't stupid you can make it work much better.
NTFS was a file system built from the ground up for security compliance. Microsoft since about Windows XP SP2 has been taking security increasingly seriously, and they were willing to break a lot of software to enforce UAC in the Vista days and got nothing but hate for it.
Still, regarding this specific hack, the exploit was hidden in Orion's telemetry, showing that Microsoft's new love of telemetry isn't just privacy-invading, it's security-degrading.
Any GitHub/GitLab/etc. employees here? I think you might be able to help mitigate some of these kinds of attacks:
> To have some minimal form of obfuscation from prying eyes, the strings in the backdoor are compressed and encoded in Base64, or their hashes are used instead.
There needs to be a quick tool that flags strings that appear to represent binary data before a merge, maybe even decoding them when possible and providing hints about what they might represent, especially inside source-code files. These shouldn't be common in checked-in code. And we should figure out a way to whitelist them in the repo that's both safe and convenient (I'm not sure how).
Is this a feature code-hosting sites like GitHub can add?
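As a very rough sketch of what such a pre-merge check could look like: scan a diff's added lines for long Base64-looking string literals and decode them where possible (the threshold and whitelist handling are hand-waved):

    # Rough sketch: flag long Base64-looking string literals on the added lines of
    # a diff and show what they decode to when possible. Threshold is arbitrary.
    import base64
    import re
    import sys

    B64_RE = re.compile(r'"([A-Za-z0-9+/]{24,}={0,2})"')

    if __name__ == "__main__":
        # Usage: git diff origin/master...HEAD | python flag_encoded_strings.py
        for lineno, line in enumerate(sys.stdin.read().splitlines(), 1):
            if not line.startswith("+"):      # only consider newly added lines
                continue
            for match in B64_RE.finditer(line):
                blob = match.group(1)
                try:
                    hint = base64.b64decode(blob)[:40]
                except Exception:
                    hint = b"<not decodable>"
                print(f"diff line {lineno}: possible encoded data {blob[:30]}... decodes to {hint!r}")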
Hmm, interesting. I'm confused if that's how I should read this. You might be right. I assume you're referring to this paragraph?
> Evidence suggests that as early as October 2019, these attackers have been testing their ability to insert code by adding empty classes. Therefore, insertion of malicious code into the SolarWinds.Orion.Core.BusinessLayer.dll likely occurred at an early stage, before the final stages of the software build, which would include digitally signing the compiled code.
They talk about adding classes at an early stage... I assumed that meant modifying the source code in some fashion. Sounds like you took it to mean they inserted binary (MSIL?) code during the build? Or are you referring to a different paragraph that indicates this wasn't part of source control?
> Our initial investigations point to an issue in the Orion software build system in which the vulnerability was inserted and which, if present and activated, could potentially allow an attacker to compromise the server on which the Orion Platform products run.
Under "With these processes in place how was your code compromised?"
If the compromise was inserted during the build process, then one countermeasure could have been reproducible builds. Reproducible builds require the source code, but they can verify whether or not the build matches the claimed source code. That would work even after it was signed.
Oh interesting. So that would mean it would depend on a compromised build machine. Yeah, that would make sense, and I guess you couldn't prevent it like this in that case. Thanks!
If it was standard to compare a dependency graph of the program architecture pre and post build, then you would catch these sorts of things right away. Supply chain attacks are not new and I imagine at least some companies have a process like this in place already.
I can imagine it failing or succeeding depending on how intrusive the UX is. It's worth trying out I think, especially in an opt-in fashion. They can improve it gradually and people can leave it disabled if they don't like it.
Note that it shouldn't be an "action" on GitHub Actions, since those are specified in the repo itself, and can hence be removed in the same commit. It needs to be an external thing.
"Finally, the backdoor composes a JSON document into which it adds the unique user ID described earlier, a session ID, and a set of other non-relevant data fields. It then sends this JSON document to the C2 server."
Is there any further explanation of how this was achieved? One might expect as "par for the course" that all external connections be blocked aside from explicitly designated ranges. I would expect that an attempt at external comms would set off alarms.
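The alarm itself is simple to express; the hard part is knowing what the designated ranges actually are and getting someone to look at the alerts. A minimal sketch checking observed destinations against an allowlist of CIDR ranges (the ranges and log format are made up):

    # Minimal sketch: flag outbound destinations outside explicitly designated
    # CIDR ranges. The allowlist and the input format are illustrative only.
    import ipaddress
    import sys

    ALLOWED = [ipaddress.ip_network(cidr) for cidr in ("10.0.0.0/8", "192.0.2.0/24")]

    def is_allowed(dest: str) -> bool:
        addr = ipaddress.ip_address(dest)
        return any(addr in net for net in ALLOWED)

    if __name__ == "__main__":
        # Expects one destination IP per line on stdin, e.g. from a flow-log export.
        for line in sys.stdin:
            dest = line.strip()
            if not dest:
                continue
            try:
                ok = is_allowed(dest)
            except ValueError:
                continue  # not an IP address; skip
            if not ok:
                print("ALERT: outbound connection to non-designated range:", dest)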
I was wondering the same, if the compromise is of the Orion product which presumably isn't just sitting there with open access to the internet? Like this doesn't seem to be a very broad exploit that could start on a machine with outside world access (like a Windows exploit itself) and then pivot into more sensitive areas.
It's my understanding that multiple, unexplained NXDOMAIN responses IS what exposed the compromised systems. Why it didn't happen earlier (or immediately) is a good question.
Observation: "avsvmcloud.com" -- seems to be the one constant around which a whole bunch of other things, which are variables revolve... (oh sure, "appsync-api" also appears to be a constant -- but it exists at a far less important place in the URL).
"avsvmcloud.com" is far, far more important -- because ALL of the communications go there...
Now, it may be that "avsvmcloud.com" is a legitimate ISP, hosting provider or what-have-you...
But, if I were an investigator on this case, I know I'd want to track each and every place that these requests flow through whoever owns the "avsvmcloud.com" network...
I'd start with the idea that because a subdomain is being used, the first thing that happens is that the subdomains must be resolved by DNS subdomain servers... so where exactly on the network of whoever owns "avsvmcloud.com" does that happen?
I'd even go so far as to audit, completely disassemble, the DNS software that is running on those servers... Give it to as many security researchers as possible... What does it do? Where does it point to? What's on the other end of those IP addresses that it resolves to? Are there any anomalies in that IP address resolution? Specifically, when/where and how do they manifest? Are there any patterns there? Who owns the machines on the other side of those IP addresses?
Etc., etc.
In fact...
What would happen if someone were to run a machine learning algorithm on, say, a (let's be polite and call it "challenged") DNS resolver?
Would it find some DNS resolution anomalies?
In fact, if I were an investigator, I'd go as far as to audit the whole chain of DNS resolvers / the DNS resolution process THOROUGHLY...
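On the machine-learning idea: even something much dumber helps. Flagging query names whose leftmost label is unusually long and high-entropy is a common way DGA-style subdomains (like the ones seen under avsvmcloud.com) get spotted. A rough sketch, with thresholds that would need tuning against real resolver logs:

    # Rough sketch: flag DNS query names whose leftmost label looks machine-
    # generated (long and high-entropy). Thresholds are arbitrary.
    import math
    import sys
    from collections import Counter

    def entropy(label: str) -> float:
        counts = Counter(label)
        total = len(label)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_generated(qname: str, min_len: int = 16, min_entropy: float = 3.5) -> bool:
        label = qname.split(".")[0]
        return len(label) >= min_len and entropy(label) >= min_entropy

    if __name__ == "__main__":
        # Expects one queried name per line, e.g. exported from resolver logs.
        for line in sys.stdin:
            qname = line.strip().lower()
            if qname and looks_generated(qname):
                print("suspicious query name:", qname)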
Anyone else find it ironic that the country that is responsible for democratizing access to scientific research (via support of SciHub), the country protecting a whistleblower against government overreach (Snowden), the country pointing out how fundamentally insecure closed source, proprietary software is (SolarWinds), is...Russia?
So you're saying that Russia's interests are entirely altruistic? That's quite a stretch. I don't think they hacked SolarWinds to "prove how insecure closed source, proprietary software is". Why don't they prove how "insecure" Kaspersky AV is, in that case? Seems strange to pick a software package that nobody's heard of, but that happens to be used by thousands of juicy industrial-espionage targets of their primary political enemy.
I have no idea what Russia's interests are. I'm just genuinely curious and in the dark. I've never been to the country and grew up in the West during the Cold War, so when I think of Russia I was indoctrinated to think of "the bad guys".
The country is an enigma to me—very good at math but, from what I read, a corrupt place with rule by power and not law. So yeah, I'm just genuinely asking: how are they the ones doing these things which make the world a better place, and why isn't it us leading the charge on these 3 issues? Maybe it is just a coincidence that those 3 things align with their selfish interests, and there is no altruism involved? Or maybe there's a group in Russia that loves the ideals of the USA, and is able to actually help implement those ideals from there, because if you tried to do those 3 things from here you would be thrown in prison due to some bad laws/people here?
Hum... Most of it seems sarcastic, but a simple antagonistic view towards the US explains a lot:
> protecting a whistleblower against government overreach
Becomes protecting an enemy of their enemy, and one that commands a high press influence.
> pointing out how fundamentally insecure closed source, proprietary software
Becomes just exploiting the enemy infrastructure.
The one thing that I'm not comfortable with a simple explanation of "the Russian government doesn't like the US" is their support of SciHub. It can explain the support quite well, but it is not the only simple explanation available, so there may be other reasons.
I vaguely recall that he had chosen somewhere else - but in the midst of the flights from HK(?) to other places, his passport was nulled by the US so he had to stay in Russia, could no longer board int'l flights or something (?)
> [Snowden] had been on his way from Hong Kong via Russia and Cuba to what he hoped would be sanctuary in Ecuador when the US cancelled his passport, leaving him stranded in Russia.
As a sibling comment notes; Snowden didn't get to choose much once their passport was revoked (made a non-person). You might recall that for a while (number of months?) they were stuck inside of the border-zone limbo of the International section of some airport in Russia.
Eventually Russia decided they'd allow the poor guy inside, presumably to snub the US.
I assume a huge percentage of this site is IT professionals and software engineers.
I'll have to ask then: what proof is there that Russia did this hack? Do you realise how hard it is to track professional hackers? You would have to trace the entire chain of network connections back to a source and hope that it is registered under their name.
I genuinely cannot believe people here think that the government managed to find the source of the hack in a couple of days; that is just pure propaganda aimed at people with little to no knowledge of computer security.
Attribution is not from tracing connections or domain ownership, it's from looking at the coding style, the "Tactics, Techniques and Procedures" and the choice of targets.
It's a complex combination of all of those things, in addition to more "offensive" type intelligence collection (spying on GRU/SVR buildings, communications, and officers, essentially, and compromising their infrastructure).
You might be surprised about how even the world's top intelligence agencies sometimes do make simple mistakes with domain and network registration which really are just genuine fuckups rather than false flag subterfuge. This is very rarely a matter of something silly like "Russian IP = Russian intelligence" and more like sloppily re-using an ostensibly non-attributable network or nameserver they didn't realize was already burned.
We're still kind of in the infancy of cyberwarfare. Attribution will probably be harder in a few decades.
But, yes, it's generally a matter of TTPs, target selection, goal analysis, and style.
You can see it in Bellingcat's investigations - carelessly reusing burners, calling from GRU offices, reusing passports, calling from two burners one immediately following the other.
Yep, all enabled by the fact that Russia is so corrupt, anyone can pretty easily buy any data about anything on anyone. So any private citizen with a bit of money and some skills can effectively act like a para-intelligence agency, which is essentially what Bellingcat is.
For anyone curious, they have two excellent articles on this from a few days ago:
There was also an amazing investigation into this published yesterday by a Russian outlet, interviewing some of the black market data brokers and law enforcement officers (both of whom claim some of the brokers will be hunted and killed by the state, now):
That's just fancy technical terms to justify the propaganda. If these kinds of "hard proof" which definitively link hacks to nation-state actors exist, why are they never publicly revealed?
Might still be backed by old fashioned humint - maybe an asset in Russia told someone. If so, that might be trustworthy, but also needed to be kept secret. If I needed to publicize and justify such information, I might try to claim that "the coding of the exploit was consistent with Russian trade craft" or something like that...
Not really a new thing. The Soviets would point out the defects in US society as much as the US would highlight defects in the USSR.
And granting asylum to persons who have fallen out of favour with an enemy/rival has been a thing back to at least the time of the Peloponnesian War and probably before.
I feel this is tongue in cheek but at the same time I've personally benefited from 2/3 of these already. I find that very interesting that I'm benefiting from the shit stirring that Russia is doing.
The handling of this whole event is in stark contrast to just a few years ago - when details on malicious activity were a closely guarded secret and useful threat data was safely locked away from anyone who might protect the public with it.
I wonder how different this would be had it been a Linux application running under AppArmor or in a container environment... One would expect, from a security perspective, that all of these remotely distributed applications would be running under some kind of chroot jail or container to prevent the kind of exposure that is obviously happening here. I think Microsoft is a little complicit here: the lack of security in their OS platform allows these types of issues to proliferate repeatedly, year after year, with no real changes in the way applications are locked down.
Isn't the whole point to this that the targeted software is supposed to run at high privileges and is also supposed to phone home? So it's the ideal vector for an exfil attack. The only way to avoid it would be to do like Hillary and run your own email server with none of this cool stuff installed.
Any decent static code analyser should be able to detect things like this (catch-all statements, Base64 encoding, etc.); I am surprised none seem to be used for production code.
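For the catch-all part specifically, even a crude regex pass over the C# sources would flag an empty catch block; a real analyzer would do the same on the syntax tree. A toy sketch:

    # Toy sketch: flag empty catch blocks in C# sources with a crude regex.
    # A real static analyzer would work on the AST instead.
    import re
    import sys
    from pathlib import Path

    EMPTY_CATCH = re.compile(r"catch\s*(\([^)]*\))?\s*\{\s*\}")

    def scan(path: Path) -> None:
        text = path.read_text(errors="replace")
        for match in EMPTY_CATCH.finditer(text):
            lineno = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{lineno}: empty catch block")

    if __name__ == "__main__":
        for root in sys.argv[1:]:
            for cs_file in Path(root).rglob("*.cs"):
                scan(cs_file)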
NOTE: The views I'm expressing here are solely my own and say nothing about my employers views.
I think that the tech industry has a severe code supply chain issue. Supply chains are a super hard problem with physical products (raise a hand if you (a) have tried tracing the supply chain of cocoa, (b) can tell a midnight factory run of luxury clothes from a legitimate one, or (c) remember the Supermicro controversy), but with software we have the ability to do a much better job of solving it. I find it really disappointing that we have failed to do so. Reading through the comments here I've seen discussions on deterministic builds, code signing, and other practices. I think that they are parts of a unified whole, but all the pieces need to be there and need to be correctly done. Below I outline where I think the industry should be.
A complete, secure, code supply chain should do the following:
* Validate signatures on all 3rd party dependencies
* Ensure that all internally written code relies on signed commits
* Have builds be reproducible
* Sign the output of those builds
Taken together all of these form a complete supply chain that applies to both closed and open source software. There is nothing technically infeasible about implementing much of it as well - to me it feels like a culture issue.
The gap between where we seem to be and where we should be:
* Validate signatures on all 3rd party dependencies
** Present Day: Many vendors cannot be bothered to properly sign the outputs of their builds. Microsoft updates, openssh releases, and things like that remain the exception rather than the rule. This problem becomes even more egregious when looking at enterprise to enterprise products such as drivers which are either massive sets of source code or precompiled blobs, both of which run with lots of privileges in the context of the product they are integrated into. Even Fedora provides lots of packages from their Koji build system, the majority of which are not signed.
** Where we could be: Normalize signing these, and normalize validating the signatures prior to any use in a build environment. This is one of the easiest places in the supply chain to insert malware due to the lack of verification. (See the sketch at the end of this comment.)
* Ensure that all internally written code relies on signed commits
** Present Day: Outside of git, most VCS systems don't even support signed commits. Within git, signed commits are not popular. I personally blame the tooling. Signing is based on PGP keys which have all sorts of known issues with use, tooling, and a general disdain due to their initial use case for email being broken. Places like Github attempt features like mandatory signing, but that falls short. Keys are still sourced from unknown places, each developer is responsible for their own key, there is no support for validating prior commits once the signing key is rotated, and using the webui totally bypasses the signing requirements (https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/about-commit-signature-verification).
** Where we could be: Let's imagine a future where git is used as a VCS. Signing keys should be centrally controlled by an authority with developers issued code signing subkeys that are rotated and can be revoked by the central authority. By having a history of all code signing keys over time, the repository can also be audited at any point in time. Even if malicious insiders directly alter the VCS, it can be flagged! I lead a project to implement such a system at my work ( https://eos.arista.com/commit-signing-with-git-at-enterprise-scale/ ) which I am posting on every discussion here to try and normalize a discussion around how to do this at other companies.
* Have builds be reproducible
** Present Day: This is probably the biggest gap in having a secure supply chain. Builds today are not reproducible nor are they deterministic. The best which I know of is NixOS which is around 99% reproducible ( https://r13y.com/ ). Debian appears 95% on a specific target ( https://isdebianreproducibleyet.com/ ). Most other products are much lower than that.
** Where we could be: The first step would be deterministic builds, where building with the same inputs always results in the same outputs. Once you have a way to store what those inputs are, you can then reproduce builds later. Securing build environments becomes much easier at that point. You can build in multiple places, at multiple security levels, and check the same output comes out each time. You can even build at a much later point in time since you should have your whole set of dependencies clearly documented and saved. Validating outputs is super easy later on since you can recreate exactly what it should have been. This is also great for build systems in general since it makes dependency graphs more accurate and reduces problems with building in different environments. With the existence of VMs and containers, this is also a problem that should be super solvable. The devil is in the details here, but there should not be any reason it cannot be solved other than a lack of proper investment.
* Sign the output of those builds
** Where we are today: This is one of the items that is actually the most popular, since it is so easy to do. There are lots of methods to sign any sort of data and the tooling around them is pretty straightforward. By signing this data, it closes the loop on someone downstream validating that data as an input to their own system.
** Where we could be: Keeping up the good work and going further to normalize signing build system outputs!
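To make the first item above ("validate signatures on all 3rd party dependencies") concrete, here is a minimal sketch of a gate that refuses to use a downloaded dependency unless its SHA-256 digest matches a pinned value and, where the vendor publishes one, its detached GPG signature verifies. The file names and digests are placeholders:

    # Minimal sketch: verify a third-party dependency against a pinned SHA-256
    # digest and an optional detached GPG signature before it touches the build.
    # The pinned names/digests below are placeholders.
    import hashlib
    import subprocess
    import sys
    from pathlib import Path

    PINNED = {
        "some-vendor-lib-1.2.3.tar.gz":
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def verify_digest(path: Path) -> bool:
        expected = PINNED.get(path.name)
        if expected is None:
            print(f"{path.name}: no pinned digest, refusing to use it")
            return False
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        if actual != expected:
            print(f"{path.name}: digest mismatch ({actual} vs pinned {expected})")
            return False
        return True

    def verify_signature(path: Path) -> bool:
        sig = Path(str(path) + ".asc")
        if not sig.exists():
            return True  # digest pinning is the floor; a signature is a bonus
        return subprocess.run(["gpg", "--verify", str(sig), str(path)]).returncode == 0

    if __name__ == "__main__":
        dep = Path(sys.argv[1])
        sys.exit(0 if (verify_digest(dep) and verify_signature(dep)) else 1)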
I do work for a SolarWinds Customer. SolarWinds told us on Thursday that the certificate was going to be revoked on the 21st. Then yesterday they told us the certificate wasn't being revoked until February 2021.
This says to me that the certificate itself probably wasn't compromised. The attacker must have found a place in the CI pipeline where they could insert code and get it signed automatically.
I'd be surprised if signing was done automatically, that would be really bad. More likely it was done manually on a package that came out their build system, without anyone stroking their beard to wonder if that system had had its compiler replaced or its cache of dependencies poisoned.
I wouldn't refer to getting hacked as a scandal. Unless it comes out that there was some sort of cover-up, it seems inappropriate to refer to it as a -gate.
It's a tad conspiratorial, but I suspect there was a concerted effort by media to muddy the impact of future Watergate-level scandals by highlighting a lot of minor "gates". Already I feel tempted to roll my eyes each time I see another HN post about X-gate; seeing many "weaker" stories using the same nomenclature certainly steals power away from the stories that hit hard.