> 7. The stolen info is sent out by infecting USB sticks that are used on an infected machine and copying an encrypted SQLite database to the sticks, to be sent when they are used outside of the closed environment. This way data can be exfiltrated even from a high-security environment with no network connectivity.
> "Agent.BTZ did something like this already in 2008. Flame is lame."
Flame's approach is different and more impressive. Agent.BTZ copied itself and used an easy-to-discover autorun.inf file in the root directory of attached disks or network shares. Flame exports its database by encrypting it and then writing it to the USB disk as a file called '.' (just a period, meaning 'current directory').
When you run a directory listing you can't see it. You can't open it. The Windows API doesn't allow you to create a file with that name, and Flame accomplishes this by opening the disk as a raw device and directly writing to the FAT partition. Impressive, right?
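To make the trick concrete, here's a minimal, read-only sketch of the "open the disk as a raw device" step. The drive letter E:, the FAT12/16 layout, and everything else here are my own assumptions for illustration; the actual forging of the 32-byte '.' directory entry is omitted.

```python
# Read the FAT boot sector through a raw volume handle, bypassing the
# filesystem APIs that refuse to create a file named '.'. Windows only.
SECTOR = 512

with open(r"\\.\E:", "rb") as vol:            # raw volume handle (needs admin rights)
    bpb = vol.read(SECTOR)                    # sector 0: boot sector / BIOS Parameter Block

bytes_per_sector = int.from_bytes(bpb[11:13], "little")
reserved_sectors = int.from_bytes(bpb[14:16], "little")
num_fats         = bpb[16]
fat_size_sectors = int.from_bytes(bpb[22:24], "little")   # FAT12/16 field

# On FAT12/16 the root directory sits right after the FATs; that region is
# where a raw writer would have to place a forged 32-byte '.' entry.
root_dir_sector = reserved_sectors + num_fats * fat_size_sectors
print(f"root directory begins at sector {root_dir_sector} "
      f"(byte offset {root_dir_sector * bytes_per_sector:#x})")
```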
While a lot of these individual features are not impressive on their own, the sum of the parts, combined with the collision attack on the certificate signature, is very impressive.
As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Nobody knew about it for years, yet it was derided when discovered and documented.
> As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Infosec is an inherently pessimistic enterprise, although spending time here makes me think it's not a perspective limited to security.
Just look at how almost every post here ends up littered with comments like "This isn't new. My XYZ already does all of this." People like to feel superior (it helps reinforce their individual nerd exceptionalism).
> This isn't new. My XYZ already does all of this.
This is exactly the attitude used by some negative-minded, mediocre people to demotivate free thinkers. To be fair, to most of them it probably also doesn't seem new in reality, because their grey cells lack the sophistication required to understand the difference.
I think it is more a case of the public information security field moving from a small, highly technical niche of hackers (example list[1]) to a mainstream career path that openly trains and employs thousands of people that may not have a traditional “hacker mind-set”[2][3].
BIFF[4] is still remembered by many within the early niche group of hackers. It is likely that similar psychology has been driving the ridicule at hacker conferences in recent years towards mainstream reporting on “cyber” topics and the use of buzz phrases such as “Advanced Persistent Threat”.
>I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
People are unsure as to why it has such a large file size (do we know why yet?). One very common explanation is that it is bloated because of poor software engineering; some of the people who believe this explanation attempt to fit the facts to that narrative.
Also consider the culture of the demoscene/exploit writers: the smaller the code, the better the programmer.
Personally I like to think that the Flame authors intentionally exploited this prejudice and made it large so that: 1. it wouldn't look like malware, 2. if it was discovered no one would take it seriously and look deeper, 3. reverse engineering it would be complicated by its large file size (cost > benefit from an AV perspective).
The same question was asked of Stuxnet; the answer is probably boring: state-sponsored malware authors are not like demo scene writers and do not care if their code is particularly elegant. They probably care more that it's J2EE-style maintainable.
And IMO (coming from a demoscener who's dabbled in malware dev for fun), they made the right choice. Sure, you pack Flame down and cut out everything non-essential, and you get it down to 64k. Good luck trying to add a new exploit later, once your target has adapted to your old ways. The goal of Flame and Stuxnet is not to be elegant or small or academically interesting (though I believe they are). The goal is to deliver a payload to their target in the most consistent way; they seem to be pretty damn dead on in hitting that goal.
Software that appears to be packed/obfuscated throws up red flags.
Rather than attempting to look like some badass in leather, Flame/Stuxnet dresses in a cheap ill-fitting suit with a bad Microsoft tie so no one will suspect it.
It is worth noting that the 8.3 short name of the hidden file was HUB001.DAT[1]. This is because VFAT allows the specification of both a short name (8.3) and long name (LFN) for each file/directory.
You can find 8.3 '.' entry names by searching a partition for \x2e\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
A file with an LFN of '.' could be found with (hopefully this is correct) \x00\x2e\x00\x00\x00\x00\xff\xff\xff\xff\x0f
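For anyone who wants to try this, here's a minimal sketch that scans a raw FAT partition image for the 8.3 variant of the pattern above. 'fat.img' is a hypothetical dump (e.g. taken with dd), and the attribute-bit filter is my own addition to skip the legitimate '.' entries that every subdirectory contains.

```python
import sys

ENTRY_SIZE = 32                               # FAT directory entries are 32 bytes
DOT_SHORT_NAME = b"\x2e" + b"\x20" * 10       # '.' padded with spaces to 11 bytes

def scan(path):
    with open(path, "rb") as f:
        data = f.read()                       # fine for a sketch; stream for large images
    hits = []
    # Clusters are sector-aligned, so entries sit at 32-byte offsets in the partition.
    for off in range(0, len(data) - ENTRY_SIZE + 1, ENTRY_SIZE):
        entry = data[off:off + ENTRY_SIZE]
        if entry[:11] == DOT_SHORT_NAME and not entry[11] & 0x10:
            # Attribute bit 0x10 (DIRECTORY) is set on every subdirectory's own
            # '.' entry; a *file* named '.' is the anomaly described above.
            hits.append((off, entry[11]))
    return hits

if __name__ == "__main__":
    for off, attr in scan(sys.argv[1] if len(sys.argv) > 1 else "fat.img"):
        print(f"suspicious '.' entry at offset {off:#x} (attr={attr:#04x})")
```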
It appears as if 8.3 file names starting with '.' are treated specially but LFNs starting with '.' carry no significant meaning.
I struggled to find references to other malware that has used a similar approach. Does anyone have more information?
Surely Windows does not attempt to automatically execute files with a LFN (UTF-16 name) of '.'?
> As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Security folks often lack the development experience, specifically with products that ship, needed to appreciate the big picture. This is why certain people on HN were so fixated on the lack of code obfuscation that they failed to give credit to the massive QA effort behind making all of Stuxnet work on such a complex target.
I say this as a security person who has previously done dev on product teams.
I think tptacek has hummed a few bars in this direction before, but it has become received wisdom on some parts of the Internet that geeks vs. government is an asymmetric fight and that since governments are stupid, geeks will win. You often see this in, let me cherry-pick out of charitability, threads suggesting that the OSS community develop surveillance countermeasures for use by dissidents subject to certifiably evil regimes.
It doesn't really matter whether the nation state in question is Iran or the United States. Do not pick fights with people who can respond to a hacking incident by writing a check for $5 million to a defense contractor and consider that low-intensity conflict resolution. It will not end well.
I agree with this, and I was similarly amazed when some folks calling themselves Anonymous started dumping personal information about law enforcement officers onto the web. You don't pick fights with people who are going to hunt you down and eliminate you. Maybe they will just lock you up; maybe it will be worse than that.
That said, there is a history of people who have done that, paid the ultimate price, and later been honored for their sacrifice. Seems like history can go either way sometimes in judging the act, hero or idiot.
I'm all in favor of people keeping their eyes open as they walk into it, though.
For me, things like Flame just tell me that what I knew as an engineer could be done actually has been done. And that is always a bit of a wake-up call.
The idea that governments are stupid in the sphere of technology ignores the recent-ish discovery by governments that if you just pay a bunch of geeks to be geeks on behalf of the (evil or otherwise) government and make them exempt from bureaucracy, they get results that are at least as good as geeks acting independently.
> You often see this in, let me cherry pick out of charitability, threads suggesting that the OSS community develop surveillance countermeasures for use by dissidents subject to certifiably evil regimes.
> It doesn't really matter whether the nation state in question is Iran or the United States. Do not pick fights with people who can respond to a hacking incident by writing a check for $5 million to a defense contractor and consider that low-intensity conflict resolution. It will not end well.
Are you really saying that people should avoid writing software that could help people who are subject to evil regimes because said evil regime might be upset at them? There's an uncertain level of personal risk associated with doing such things, but there's definite moral hazard in total self-interest.
Either way, if Flame was written by the US or Israel a lot of us on here are already complicit in such a project. We live in a democracy. Those are our tax dollars, hard at work.
I totally agree with you otherwise; governments are not stupid.
There's no personal risk to writing regime circumvention tools. Iran isn't going to have you assassinated for your work on Tor.
There is serious risk to using Tor in Iran. Death squads and disappearances aren't a conspiracy theory in Iran; they are the regime's well-understood M.O. When circumvention tools like Tor work, they hide your traffic from the regime. When they stop working, or are turned, they do exactly the opposite: they attach a statistical marker to your traffic that says "whether or not you can read these packets, the person sending them is interesting".
The people working on circumvention tools are mostly well-intentioned (many of them are friends of mine), but they are delusional about the SWOT analysis at play here. None of them have any unique skills that aren't available to an organization willing to shell out 6-7 figures to a team in a month. Money buys competence. A lot of money buys a lot of competence. Iran has a lot of money. Circumvention projects do not.
Kickstarter hasn't seen the amount of money that a world government could spend without director-level approval on a project to turn a circumvention tool against its users.
And that's before you get to the fact that many, if not most, of the computers in authoritarian regimes are probably already rootkitted.
While I agree with most of your post, I do have to take issue with this statement:
"There is serious risk to using Tor in Iran."
While there certainly is a chance of getting in trouble for using Tor, I wouldn't classify it as "serious risk." The government in Iran faces a situation w.r.t. filter circumvention similar to what the US faces when cracking down on illegal file-sharers. From my (admittedly limited) experience in Tehran last year, anyone with even a little computer know-how will have either some proxy service or Tor installed on their computers. The more knowledgeable ones have their own VPNs. Most use it to get through to Facebook and chat with their friends. It would be impossible to persecute everyone who's used circumvention tools without emptying half of Tehran.
The government certainly doesn't shy away from the measures you mentioned, but they generally go after far more grievous "offenses" than browsing the internet through Tor. Being gay, for example.
All the competence in the world won't let you break basic crypto algorithms without at least breaking a sweat.
The playing field between Alice and Bob on the one hand and Eve on the other hand is inherently asymmetrical. Given equal competence and time to work on it, Alice and Bob are going to come up with an encryption scheme that Eve won't be able to break. You seem to be convinced that given almost unlimited resources, Eve can break any scheme Alice and Bob can come up with. I'm not sure I see any evidence for that.
In some cases, Eve is willing to arrest/maim/kill anyone caught using Bob's encryption scheme. Eve has the ability to control at least some of the intermediary systems. Eve doesn't need to specifically break the scheme, just be able to figure out who's using it so she can go apply some lead pipe cryptanalysis [0].
I don't think he's saying that at all. I interpreted it as, given unlimited resources, Eve can determine that Alice and Bob are communicating over encrypted channels which, for Alice and Bob, is almost as bad as having their encryption broken.
I took that to be a specific example -- Tor may be detected using traffic analysis -- of a more general principle -- circumvention tools can not hope to withstand nearly unlimited resources. I thought tptacek was pretty explicit in making this more general statement.
One thing that a lot of circumvention tool promoters get wrong is the threat model. The threat model isn't "attacker can read your traffic" --- although some of the best known circumvention tools have made cryptographic mistakes that did allow that. The threat model is "tractable attacks that isolate traffic using your tool from bulk Internet traffic".
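To illustrate that threat model, here's a minimal sketch of the cheap, no-decryption attack: flag any flow whose destination appears on a list of known relays. 'relays.txt' (one relay IP per line, e.g. extracted from the public Tor consensus) and 'netflow.csv' (src,dst,port rows) are hypothetical inputs I've made up for the example.

```python
import csv

# Known relay IPs the censor has already collected.
with open("relays.txt") as f:
    relay_ips = {line.strip() for line in f if line.strip()}

# Netflow-style records the censor already logs: src, dst, dst_port.
with open("netflow.csv") as f:
    for src, dst, port in csv.reader(f):
        if dst in relay_ips:
            # No crypto is broken; the sender is flagged merely because
            # their traffic touches a known relay.
            print(f"{src} -> {dst}:{port} is isolated from bulk traffic")
```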
A torture cell will do just peachy at decrypting the actual packets.
> Are you really saying that people should avoid writing software that could help people who are subject to evil regimes because said evil regime might be upset at them?
No, I'm saying that "my software helps people who are subject to evil regimes" is approximately as irresponsible as "my homeopathic remedy solves cancer" except in this case cancer has essentially infinite computational resources, arbitrarily high numbers of very savvy domain experts, and an army. Any hacker who believes their software, or their community's software, will hold up to dedicated adversarial interest from a nation-state is dangerously delusional.
I don't think it's as simple as you make it sound.
If somebody writes a tool that helps 100 million Chinese people access the unfiltered internet, a percentage of them will be caught and punished in devastating and inhumane ways. Some fraction of the illicit traffic will be blocked and the holes sealed up.
The remaining people will have access to material that, as far as the Chinese government is concerned, poses a tremendous risk to the state's continued authority. If this - as the state obviously believes - would help speed along the atrophy of an authoritarian state, net human suffering would be reduced overall.
I see what you mean and mostly agree, but not with the example you chose: the Chinese government never intended to completely block sensitive content. If they wanted to, they would use other technologies. Any Chinese netizen with a VPN can get outside, and many of them do. What the Chinese government does, successfully, is make sensitive content slightly harder to access compared to local "safe" content. Then, like water taking the downward slope, most information consumed by Chinese netizens stays inside the GFW.
I think a bigger asymmetry than "hackers vs. governments" is "defense vs. offense" -- the state of computer security is laughable enough that the attacker will probably win, whoever he is.
If non-government hackers were building offensive tools, vs. defensive, and only had to win periodically vs. essentially all the time, they'd be able to put up a better fight. Government doesn't have a particular monopoly on competence, and internal politics and budget issues probably would allow a relatively capital-poor non-governmental enterprise to do pretty well vs. a contractor/government team.
Similarly, I'll grant that the fight may appear asymmetrical at the higher level of organizations if one side is run by geeks and the other side is run by non-geeks. Nonetheless one should recognize that those hired by either side to be on the "front lines" so to speak will be geeks. Whether they're computer geeks or gun|bomb|espionage|etc geeks, assuming an asymmetry in the ability to employ strategy and accomplish goals on that level would be unwise.
Right. It always comes back to the people performing the task. Because people are generally "good" (IMHO), "evil" isn't particularly easy to get away with. This goes for geeks, police and even the military. Commands can be issued, but real people with real emotions have to deliver. This is why whistle-blower protection is so key to our economy and society.
And while politics may attract a disproportionate level of narcissists and sociopaths, I'm guessing CS doesn't.
I'd rather assume that sociopaths are drawn to politics because it's kind of a wildcard field (you can pretend to be an expert and meddle in pretty much every topic).
That doesn't mean there aren't highly specialized and capable sociopaths out there.
Neither Stuxnet nor Flame target hackers or even the general population. They were targeting specific institutions. Attacking is easy once you know who or what to attack.
Each individual hacker and each individual citizen is a much smaller target. Sure, as soon as you're identified, you're toast: they break in and install malware on your computer -- if you're lucky. But there's a lot of hackers and even more normal people, all of which can be made individually harder to identify through smart software.
Lots of people. Relatively few pieces of software. Pit against an avalanche of money and access to the best talent in the world. The incentive structure doesn't work, at all.
Don't build circumvention tools. If you're lucky, they'll just turn out to be useless.
So when you mention black choppers, people will assume you are either being ironic or crazy. There's a particular policy goal affiliated with these attacks, and a spectrum of options for achieving that policy goal. Those options included "There exist certain individually identifiable employees of a foreign government who are personally indispensable to implementing something which goes against our policy goals. We could assassinate them."
If you read the papers you know that that option is neither a joke nor the fevered imaginings of a paranoid conspiracy theorist.
It is likely that such assassinations are against both International Law (Geneva Conventions) and US law. Carter, Ford and Reagan all prohibited them through executive orders. Even if you think that knocking off some Hamas leaders is a good idea, you will experience scope creep - last year a (sort of) influential policy advisor suggested using drones against Julian Assange.
Honestly, "we" (the West) are best armed, best funded, most free peoples ever to walk the earth. If we cannot put aside assassination and torture, then the human race has no hope.
See here for an interesting review of the legality of such assassinations:
edit: for clarity, before I get flamed as a woolly left-winger: I think that there are many people in the world who, if they were hit by lightning today, would leave the world much better off. But
a) I think the world is net worse off if democracies 'arrange' that lightning, because it demeans the important point of a democracy - being a beacon of hope for the future generations.
b) the choice of targets is not discussed in a democratic manner, and almost certainly would not be my choice. (Now that's a referendum I would love to see :-)
c) my guess is that, as with crime, if you take out the people committing it right now, someone else magically steps into their place. Sometimes someone who yesterday was not committing those crimes.
> If we cannot put aside assassination and torture, then the human race has no hope.
We can and should put aside torture. Assassination, however, is still a preferred tool when the alternative is often a larger-scale military conflict. A focused attack has a better chance of avoiding hitting innocent bystanders.
> important point of a democracy - being a beacon of hope for the future generations
I do not think this is a point of "a democracy" at all.
> the choice of targets is not discussed in a democratic manner, and almost certainly would not be my choice
This is a serious point. And it arises in any military conflict. How the democratic public controls its military is a matter of serious study; I am not competent in this, but perhaps someone could suggest a few links?
It seems pretty clear that the US public has democratically decided to delegate the commission of war crimes to its military and not be told about the details.
A weird phenomenon that I have observed is the general public does not seem to consider killings that incorporate the use of aircraft to be assassinations.
Drone attacks, Apache missile strikes, and even dropping Navy SEALs on people with helicopters all seem to fall into some sort of "standard act of war" category when talked about in public. If you even merely refer to these things as assassinations you are written off as trying to use exaggerated or at least loaded language.
It's almost like people think "assassinations" are limited to snipers and James Bond figures breaking into your hotel room and making it look like a suicide.
That's because only the Good Guys(tm) have Hellfires, Apaches and Tomahawks to fire at will. The Good Guys also have restraint - the US won't send a cruise missile into an apartment complex in Islamabad, for instance - and only attack people who are both in a war zone and lack public support. The scary thing about covert assassinations is that they could happen anywhere and be committed by anyone (IMO the "anyone" part is the key bit for differentiating "war" and "murder"), whereas large-scale strikes are carried out by political figures who are (presumably) accountable for their actions.
Nice theory. Please try to convince people like Mostafa Ahmadi-Roshan and Massoud Ali-Mohammadi that this theory is true.
Oh, you can't. They were assassinated by a western-backed democracy. (There have been many more; those were just the first two names that turned up of Iranian nuclear scientists assassinated by Israel.)
As much as you'd like to believe that assassination does not happen, it does. In the very same conflict that gave us Flame and Stuxnet. In fact I would not be surprised if information from Flame was used to target assassinations.
If it were done using a hellfire missile from a drone would that be called an illegal "assassination"? The idea that assassinations are more illegal or immoral than -- say -- firing hellfire missiles at suspicious looking people in countries we don't care for is pretty silly.
That said, I'm sure that if the US was involved there was some kind of backwards hoop jumping so as not to technically break the law. "I just had a chat with him on a park bench when we bumped into each other. He said he'd look into it."
The point in the first paragraph regarding 'scope creep' is a non sequitur because Julian Assange was in fact not assassinated and public figures calling for his assassination were not taken seriously.
The second paragraph and points a and c are non sequiturs.
Down-modding stuff is fine; giving reasoned comments is fantastic.
Lynch mob - no, but I was struck with concern and amusement by the idea of a quarterly referendum on which world figures we should target for assassination, plus maybe a limit on the number of civilian children whose collateral deaths would be acceptable on the voting list. In fact it's the opposite of a lynch mob. A lynch democracy, perhaps.
I don't quite understand the non sequitur part... could you expand?
The thing I find most interesting about Flame: whoever developed it surely understood that by being released into the wild like this, their new cryptographic attack was guaranteed to eventually be discovered and analyzed. And yet they spent that attack's secrecy on a (very sophisticated, but still) fishing expedition.
So what cryptanalytical capabilities do they have which are considered too sensitive to expose via malware?
Bear in mind that attacks on MD5 have an inherently limited shelf life, and that while the exploit used in Flame may be new, the underlying vulnerability and the fundamental technique used to exploit it are very well known.
Think about it this way: Flame was designed not to spread automatically, only when it was told to. A targeted attack like this is difficult to discover, since it affects relatively few computers compared to a virus designed to propagate at every opportunity, and the fishing expedition was limited to only the people its operators were interested in.
Combine this with the fact that we're now dating the creation of the virus to summer 2008 at the latest [1], and you've got a sophisticated surveillance mechanism that was installed on thousands of computers and evaded detection for at least four years.
I'm sure there are lots more tricks that advanced virus authors like this have up their sleeves, but they're only useful to someone if they actually get used, and this seems to have paid off for whoever was behind it.
Now I'm wondering if the submitter wrote a good title and some mod came along and bowdlerised it?
It's a terrible title. Fine for a media site trying to sell stories based on sensationalism but I thought we were building a brave new online community here.
Access to signing keys is very relevant, and I think there is a very real chance (p>0.2) that the huge oversight MS made with the Terminal Server keys happened because they were ordered to do it.
That's an awfully baroque government backdoor --- a misconfigured X.509 attribute on a certificate that turns out to be signed with a hash for which controlled collisions turn out to be feasible.
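For what it's worth, the underlying weakness is easy to spot after the fact. Here's a minimal sketch, using the third-party Python 'cryptography' package and a hypothetical 'cert.pem', that flags a certificate still signed with MD5 -- the precondition for the kind of chosen-prefix collision used here:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

algo = cert.signature_hash_algorithm          # None for schemes without a separate hash
if algo is not None:
    print("signature hash:", algo.name)
    if isinstance(algo, (hashes.MD5, hashes.SHA1)):
        # Collisions against MD5 (and increasingly SHA-1) are practical, so a
        # CA that still signs with them can be abused the way Flame abused the
        # Terminal Server licensing chain.
        print("weak signature hash; collision-based forgery is a realistic risk")
```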
The source code to Windows doesn't matter. In 2012, even teenagers find vulnerabilities in Windows by reading the assembly code in IDA Pro. It's one of the most comprehensively reverse-engineered pieces of code on the planet.
mtgx theorized it would be easier with access to the Windows source, and it would, and you know it would, and you're purposely being obtuse; arguing for the sake of arguing.
There is a reason we have programming languages, and we don't all write directly in machine code. Just because it's technically possible to do things in a more difficult way doesn't mean they would be done that way with a faster, easier option readily available.
Sure, it'd be "easier" in a strictly literal sense. Just as it'd be easier to write these exploits on a system with a 60" monitor.
The point is there's not a /significant/ difference; understanding code at a high level doesn't really help to attack it (it can make it harder, since the edges that you look for to exploit are precisely those parts you try to abstract away in a higher-level language), and the Windows codebase is well understood, with lots of publicly available information describing it, even without the source code.
How well do you understand the Windows Update MITM issue? Take a stab at explaining exactly what part of it would have been easier with Windows source code?
I don't think so. I won't go on record with what I know, but I do know that Microsoft has consented to allow governments to audit, review, and sometimes even modify its source code.
I suspect a polite request (perhaps backed by a threat including the L-word) will get them far further than a virus.
Edit: Others have pointed to public documentation of this program. I believe the two cases I was aware of at the time were the governments of China and Germany.
Enterprises w/ 10k+ seats, OEMs, MVPs and governments can get access to Windows source these days. Microsoft launched the program in 2006 or so to dampen the "Linux is more secure because we can see the source!!" FUD.
Government Security Program: Addressing the unique security requirements of governments worldwide by helping government actively participate in ensuring the security of their critical systems. We help enhance system security by providing access to Microsoft Windows and Office source code, prescriptive and authoritative security guidance, technical training, security information, and Microsoft security experts.
The Common Criteria Evaluation and Validation Scheme (CCEVS) is a form of software accreditation that focuses on software security.
You can view the results for the Windows 7 accreditation at [1]. The website also has comprehensive documentation on the methodology used to accredit the software (including visibility of the source code).
To paraphrase Edison, anything worthwhile is 99% perspiration and 1% inspiration. The novel MD5 collision/Windows Update propagation is Flame's 1%. The rest is just what's made possible as a result.
It is a cogent reminder of the fragility of the Internet's security infrastructure.
> 9. Latest research proves that Flame is indeed linked to Stuxnet....
What's the chance that this "Resource 207" is some third-party module that more than one developer had access to?
I concede that placing the same resource in the same resource location in two different, unrelated applications is a bit of a long shot, but I don't see it as a smoking gun either.
>> Nobody knew about it for years, yet it was derided when discovered and documented.
I had the same reaction; then I thought they did this on purpose to downplay how impressive Flame really is. I imagine the people writing these blogs are actually thinking "Holy S%$&!" behind closed doors or within other security circles.
Flame caught all the attention because it made use of a new hash-collision technique not previously known publicly, which would suggest it was government-backed.
FWIW (I didn't realize this either until the end), the article is actually pointing out that Flame isn't lame for one very specific reason: the cutting-edge cryptography research that went into it.
No. The article is pointing out that Flame isn't lame in lots of different ways, and saying that the naysayers kept calling it lame until one single spectacular bit of non-lameness came to light. It's suggesting that they should have cottoned on sooner. At least, that's my reading of it.
I thought the big thing was that Flame found its way into a network locked off from the outside world. So the intrigue lies in how it made it into this network?
I hadn't heard about that (so please excuse my ignorance), but surely it's just a case of somebody accidentally/intentionally bringing it in on removable storage, like a USB drive, and plugging it in.
Yes, the article says that it will put itself onto USB sticks so that it can be transferred out of a walled garden; therefore it can enter the same way.
Not only duplicating itself but putting data from inside the air-gapped network onto pen drives so it can escape. I guess the one is useless without the other, but this somehow seems far more impressive.
> "Agent.BTZ did something like this already in 2008. Flame is lame."
Flame's approach is different and more impressive. Agent.BTZ copied itself and used an easy-to-discover autorun.inf file in the root directory of attached disks or network shares. Flame exports its database by encrypting it and then writing it to the USB disk as a file called '.' (just a period, meaning 'current directory')
When you run a directory listing you can't see it. You can't open it. The windows API doesn't allow you to create a file with that name and Flame accomplishes this by opening the disk as a raw device and directly writing to the FAT partition. Impressive, right.
While a lot of these individual features alone are not impressive the sum of the parts, combined with the collision attack on the certificate signature are very impressive.
As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Nobody knew about it for years, yet it was derided when discovered and documented.