Reverse engineering how something like this was created is mind-boggling. The initial intelligence gathering on the target systems, developing the plan of attack, recruiting experts on the Siemens hardware and physicists to explain what could go wrong... and development and QA must have been grueling, since the cost of failure is so great! Never mind the deployment and monitoring to see if it was effective! They probably recreated the entire environment to test different ways to cause havoc.
Stuxnet recorded various data points while the cascades and centrifuges operated normally, in order to replay this data to operators once the sabotage began. They must have had a working system to test this on?! The budget for something like this is probably in the tens of millions, if not more. The HR requirement must have been pretty large too: analysts to gather information, managers, programmers, QA, Siemens hardware experts, physicists, deployment, monitoring, etc.
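That record-then-replay behavior is conceptually simple, even if the engineering around it wasn't. A toy sketch of the idea (all names invented, purely illustrative, nothing to do with the real implementation):

```python
from collections import deque

class ReplayTap:
    """Toy man-in-the-middle between sensors and the operator console:
    record readings while the plant runs normally, then loop those
    recordings back to the operators once sabotage begins."""

    def __init__(self):
        self.tape = deque()
        self.sabotaging = False

    def pass_through(self, reading):
        if not self.sabotaging:
            self.tape.append(reading)  # build a library of normal-looking data
            return reading
        # During the attack, hide the real reading and replay an old one.
        old = self.tape.popleft()
        self.tape.append(old)          # keep looping the tape
        return old
```

Every sensor reading goes through `pass_through()`; once `sabotaging` is set, the operators only ever see pre-recorded "normal" values, no matter what the centrifuges are actually doing.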
> The budget for something like this is probably in the tens of millions if not more.
Absolutely. This was a massive defense spending project by any measure. How many people do you think worked on it? Assuming the project was highly compartmentalized, I would estimate that there are at least SIX subteams currently working on the next Stuxnet.
- 0-Day exploitation of PCs. How big is the team responsible for discovering / purchasing 0-day exploits?
- Hardware/firmware-level infection. This would require expert knowledge of the specific control systems.
- Networking / infrastructure. This requires an intimate knowledge of target network topology.
- Spear-phishing payload delivery. Perhaps the points of entry were several levels removed from the actual target facility (e.g., security guards' wives' laptops).
- Testing / QA.
All of this of course has to be backed up by world-class intelligence support, which I shan't address further. The technical feats of developing this alone are astounding and intriguing.
> 0-Day exploitation of PCs. How big is the team responsible for discovering / purchasing 0-day exploits?
Given the speculation that it was the US behind Stuxnet, this one is cheap and easy. The US has been buying up ready-made exploits for a good while now (there's a reason the likes of Raytheon are hiring exploit devs left and right) and has nice stockpiles of them ready and waiting for the likes of Stuxnet.
This is true because you heard it's true, or because you know it's true? Raytheon definitely has a lot of people on staff who are at least peripherally involved in vuln dev. That's not the same thing as having a staff full of exploit developers. You get peripheral involvement in vuln dev just by doing malware reversing, which is pretty low on the food chain, and something the government definitely (firsthand) spends money on.
I can also confirm that Raytheon is building up this capability (although less so than Northrop and Lockheed).
If you're curious what companies are actually committing to vulnerability dev you can search any cleared jobs site for "offensive"; the companies that have listings are who you'd imagine them to be (minus a couple placement firms that just put people right at the Fort).
At least three different people I know are significantly involved in that area. You probably know some of them too. I detest them for the ethics of it, and keep my distance as a result, but there's no question what they do and where the money comes from.
But that would still make it quite a bargain compared to buying physical weapons systems (not to mention the greater deniability / diplomatic two-steps it enables).
Exactly. "Tens of millions" sounds like a lot, until you realize it's not that much. Most 50-employee companies could make a $10 million investment if they really had to.
What's going to happen when the first Chinese/North Korean/... company succeeds at actually doing this? When will we have the first startup doing it? Startups are known for creativity, both in technical development and in interpretation of the law, so why not?
The cost for things like this needs to go up, by a lot, fast. Or we're going to be in a deep hole.
In the David Sanger article published in the Times attributing Stuxnet to the US/Israel, this bit really struck me:
"One day, toward the end of Mr. Bush’s term, the rubble of a centrifuge was spread out on the conference table in the Situation Room, proof of the potential power of a cyberweapon. The worm was declared ready to test against the real target: Iran’s underground enrichment plant."
And I don't mean to stray off Stuxnet here, but just really quickly: the chosen-prefix collision attack used to sign the Windows Update malware (Flame), also suspected of being from the US, was a never-before-published variant.
The computing power alone was on the order of $200k, which makes you wonder what else the NSA or the national labs have up their sleeves.
> The chosen-prefix collision attack used in signing the Windows Update malware (FLAME) also suspected of being from the US was a never before published variant.
Is anyone aware of a somewhat comprehensive auto-update cryptography survey anywhere?
I am often alarmed by the number of updates pushed through desktop software, often with little explanation. (I'm looking at you, Adobe.) Not just for security reasons, but for bandwidth management too.
Many open source products seem to just query a URL and direct you to go download stuff. With SSL essentially broken, that's gotta be a bit risky vs. MITM.
Gentoo for one combines pre-distributed SHA256, SHA512 and Whirlpool checksums with file size, which feels secure enough against collisions. But the pre-distribution is decentralized through potentially untrusted (MITM-able) parties, the cryptography around that process, if any, is less than transparent, and integrity checking is apparently not performed on the locally extracted package database.
Perhaps we need a standard, cross-platform solution in the software-update space that is cryptographically paranoid, well-reviewed by multiple parties to be considered secure, meets the generalised need, and has OS-level integration features more advanced than "secretly do things in the background".
> Many open source products seem to just query a URL and direct you to go download stuff. With SSL essentially broken, that's gotta be a bit risky vs. MITM.
There's nothing stopping one from linking against their own copy of an SSL lib, and supplying their own list of trust anchors/trusted CAs. I've been wondering for a while why lots of apps (e.g. mobile apps) don't do this more often.
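In Python, at least, shipping your own trust anchor is only a few lines. A hedged sketch (the CA file name is hypothetical; an app would bundle its own PEM):

```python
import ssl

def pinned_context(ca_pem_path="bundled_update_ca.pem"):
    """Build a TLS context that trusts ONLY the CA we ship with the app,
    ignoring the system trust store entirely."""
    # PROTOCOL_TLS_CLIENT enables certificate and hostname verification
    # by default (verify_mode == CERT_REQUIRED, check_hostname == True).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=ca_pem_path)  # our anchor, nothing else
    return ctx
```

The upside is that a compromised public CA can no longer mint a certificate your updater will accept; the trade-off is that you now own the problem of rotating the bundled anchor.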
I believe the best way to do it is something like ECDSA to sign update packages and verify them on the client, but I'm not familiar enough with the crypto field to understand how the entire mechanism works.
Sure, signatures are ideal. The problem for distribution maintainers, I guess, is that they can't really sign off on things; only the actual package developers can. Further, you'd wind up providing a key distribution service, which may rapidly become more complex than the software packaging itself.
Given the above, perhaps all distribution maintainers can realistically attest is "it hasn't changed since I first saw it", which is what providing multiple checksums of a file does, and which is probably lower CPU and software-library overhead than performing a cryptographic signature check.
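That "hasn't changed since I first saw it" check is cheap to express. A minimal sketch in the spirit of Gentoo's Manifest entries (function name and digest selection are mine, not Gentoo's):

```python
import hashlib

def verify_distfile(path, expected_size, expected_digests):
    """Accept a file only if its size AND every listed digest match.
    expected_digests maps hashlib algorithm names to hex strings,
    e.g. {"sha256": "...", "sha512": "..."}."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) != expected_size:
        return False
    return all(hashlib.new(alg, data).hexdigest() == hexdigest
               for alg, hexdigest in expected_digests.items())
```

Requiring several independent digests plus the exact size means an attacker must find a simultaneous collision across all of them at a fixed length, which is a much higher bar than breaking any single hash.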
"Just as easily"? With what budget? The United States has a crumbling infrastructure and is a few days away from massive, across-the-board funding cuts that will touch all corners of government, including defense. Sabotage is not only the best-priced solution, it's the only solution the country can afford in the current political climate. The US is trying to wind down its wars, not start new ones.
They might also think about bombing the US because they don't want an "atheist capitalist state" to have nuclear power. This kind of non-argument can go both ways.
Only if you think that a free liberal state like the US and a theocracy like Iran, where (for instance) they hang people for being gay, are in any way morally equivalent.
That's the kind of thing that ensures the MAD principle kicks in and an Islamic ally fights back. This is ignoring the side effect that any damaging nuke would send fallout over the entire Middle East, including Israel and other allies.
I was under the impression that the US started this war with their questionable military efforts abroad?
Pretty sure the US armed a lot of the terrorists in the first place. When you suck the resources out of the world and push people into starvation whilst living in the land of plenty, you're obviously going to become a target.
There are millions of Muslims representing hundreds of view points.
"All Christians are homophobic retards." What? There's nothing wrong with that statement. It describes those Westboro morons and therefore can be extrapolated to every Christian, right?
> They must have had a working system to test this on?!
The speculation is that Stuxnet was tested on P-1 centrifuges that the US acquired when Libya dismantled its nuclear program, set up in Israel's nuclear arms facility in Dimona. [1]
We cannot begin to imagine the extent to which world military powers are currently developing and deploying cyberweapons.
Given the success of Stuxnet, it's nearly certain that such offensive cyberwarfare programs have gotten increased funding and support from the highest levels of command. From the article, Stuxnet 0.5 C&C servers first went online in 2005. 2005! George W. Bush ordered the deployment of Stuxnet!
I personally cannot wait to hear what the cyberweapons of 2013 look like.
This is perhaps true, yet to the same extent that it would be true of any other secret weapon's deployment.
For example, let us consider the development and deployment of Stuxnet to be analogous to a miniature Manhattan Project. What proportion of the physicists and engineers in the Manhattan Project do you think were aware that they were building a large bomb? I would guess as much as 20% of the personnel directly involved with development knew what the project was. This includes the "integrators" - project managers and people in similar roles that need to know how different pieces fit together. I imagine the same is true of Stuxnet.
"The 2007 variant resolves that mystery by making it clear that the 417 attack code had at one time been fully complete and enabled before the attackers disabled it in later versions of the weapon."
The thing that struck me most was the use of the word "weapon"[1]. Jeff Moss warned in his 2011 BlackHat opening speech that blurring the line between cyberwarfare and actual warfare is inevitable. Wired's use of "weapon" here signifies that shift, and really reinforces the fact that each one of us who is writing software may play a part in cyber wars, even if inadvertently.
[1]It may have been an unintentional use of "weapon," as Stuxnet is referred to as a "cyberweapon" throughout the article, but the point that we are moving towards describing cyber warfare as actual warfare still stands.
* a pseudofile that resides in memory
* accessed using standard file functions
* messages cannot be larger than 424 bytes when sent between computers
* can broadcast messages within a domain
Mailslots are an SMB-based IPC mechanism that dates back to Microsoft LanManager (LANMAN).
I could see using mailslots as a mechanism to disguise traffic and potentially thwart NIDS. SMB broadcast traffic is considered "noise" by a lot of admins and might well be excluded from traffic monitoring to prevent "chatty" traffic from filling the logs. Using mailslots, as opposed to rolling a custom broadcast-based protocol, makes the traffic sink into the normal SMB noise floor.
There have been vulnerabilities found in the code handling mailslots, but the protocol itself is just a mechanism to do broadcast-based communication. It's old and crufty, dating back to the DOS LanManager days, but I'm sure there are applications out there that still rely on the functionality and, as is typical for Microsoft, the API still exists in modern Windows versions. (The NetBIOS "Browse List" functionality that powered "Network Neighborhood" uses this protocol, for example.)
I'm wondering if government agencies like the CIA, NSA and their counterparts in other countries look for vulnerabilities in programs but never report them to the vendors for fixing but instead catalog them for possible use in future exploits.
(Actually, I'm not really wondering; it's probably naive to assume they wouldn't.)
On the third page of the article, there's a screenshot of the fake company website where the command and control servers resided, set up by the CIA/whoever back in 2006.
Today, if you search for the specific phrases used in the navigation bar, Google returns only 3 websites:
Sadly, these sites just look spammy rather than like fake sites set up by the CIA (and Alexa shows some SEO work has been done... but that could be part of the facade).
Still, fishing for CIA C&C servers sounds like a fun game; they must be out there today. Anyone have any ideas how to find them?
Follow the malware. Dancho Danchev [1] used to be quite forthcoming with his analysis, until he wasn't anymore. If you set up a malware aquarium [2] you can see the C&C servers these things use. Although not all malware reproduces in captivity.
The most amazing thing about Stuxnet is that if Hollywood made a movie about it, we would find it too unrealistic, even if it were less fantastic than the real events.
We would find it unrealistic because Hollywood would get the details wrong. Encryption would be portrayed as wiggly squares on a screen. "Port scanning" would be confused with "hacking."
The example I gave to a politically minded friend: Imagine a political drama with dialog like this:
"We've found a bug in the parliamentary procedure! Call the senator!"
"Oh no! Quick, we've got to omnibus the filibuster before the cloture overflows and the whole bill crashes!"
I wonder if such weapons have already been directed against our advanced fighters, ships, and submarines.
I remember reading about the COTS (Commercial Off-The-Shelf) program in the late '90s and the use of Windows NT 4 on AEGIS vessels. Supposedly, there was a protocol for rebooting everything every two weeks. Hopefully, nothing critical would be down the moment there was an attack. (To be fair, the NT4 kernel is rock solid, so long as you leave it unmolested, which Microsoft didn't.)
Well nothing works forever on a warship anyways, and the Navy is already very big on Preventative Maintenance (i.e. "fix it until it's broke"). So any plan assuming that a system will stay up for an entire deployment is negligent from the start; you might as well practice having to reboot the system from that perspective.
My understanding, and my experience from working with NT4 machines back then, is that you had to reboot them every so often. It wasn't just a matter of practicing rebooting.
Sure, I'm just saying that was (and is) par for the course already for the Navy. It would be like complaining that the software comes in an ugly box... even if it came in a nice box, the Navy would just throw it away and stuff it in an ugly box anyway.
And I'm saying that the boxes wouldn't insist on getting ugly, and most of the equipment wouldn't insist on preventive maintenance, in the middle of an attack. The NT4 boxes might well have insisted on being rebooted at an inopportune moment.
> The NT4 boxes might well have insisted on being rebooted at an inopportune moment.
Well I've been on a boat that used NT4 for stupid office tasks and HP-UX somewhere in the actual Combat Control System.
Guess which one shit the bed in the middle of our graded inspection when we were supposed to be tracking a simulated enemy in a life-or-death situation? (Hint: MS didn't write the OS).
To rephrase it a bit, there are vanishingly few pieces of gear that the Navy assumes that must work in the middle of an attack, and most of the pieces that do fall under that assumption have manual overrides/backups/inherent redundancy/etc. In our situation, we switched over to paper-based methods and managed to keep the contact situation until the system could be rebooted.
So if the Navy builds a ship that is single-point-of-failure on any commodity-OS-driven computer they deserve what they get. We've known since before WWII that survivability in combat requires redundancy.
Honestly on a surface ship I can't think of very many 1-hit-kill components. Even in WWII tiny little Destroyer Escorts were able to withstand multiple shell hits from Japanese Heavy Cruisers and even the Yamato. (The Battle off Samar, if you want to wiki it).
With the move toward computerization and long-range missile-based combat there's probably a lot of risk with the Fire Control System, Radars (e.g. AEGIS), stuff like that. But even blind you can at least run away, and the CIWS has an independent fire-control radar for last-resort self-defense.
Submarines are more problematic. There's only the one pressure hull, only the one reactor, only the one main propulsion train, and watertight compartmentalization only exists for the reactor compartment.
This makes everything about subs more expensive since all work that affects these things has to be formally controlled and QA'ed, re-tested, etc. to avoid losing more subs like we lost Thresher and Scorpion.
I think it was Win 95/98 that would hang after about 49.7 days of uptime, when the millisecond tick counter overflowed its 32-bit value. Not too many people got to experience that. I think NT4 might have reset the uptime counter after the same period.
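The 49-day figure falls straight out of a 32-bit counter of milliseconds since boot, which is the widely reported explanation for the Win95/98 hang:

```python
# A 32-bit milliseconds-since-boot counter wraps after 2**32 ms.
WRAP_MS = 2 ** 32
wrap_days = WRAP_MS / 1000 / 60 / 60 / 24
print(f"{wrap_days:.1f} days")  # -> 49.7 days
```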
Am I missing something, or had Stuxnet started development before any of the centrifuges were installed? Was there perhaps an even larger game afoot that led Iran to choose certain hardware in the first place?
I suppose development of the software could have started without knowing which PLCs it would eventually target, but that seems doubtful to me. Of course, the easiest explanation is that I'm missing something in the timeline.
I remember when the "NSA" variable name was found in Windows source code that accidentally leaked out. Some people claimed that the NSA had backdoors into Windows, and nearly everybody sang happily: "Conspiracy theorists!"
I'm not so sure that nowadays, with all this Stuxnet insight, people would be so quick to label them conspiracy theorists.
Also, no more Windows source code has leaked out with all the comments and variable names in the clear.
One has to wonder how "open" Windows actually is to the NSA and if all these 0-days so commonly found are really honest mistakes or not...