Remotely Attacking Network Cards (or why we do need VT-d and TXT) (theinvisiblethings.blogspot.com)
27 points by wglb on May 1, 2010 | 18 comments


"Advanced" network cards support IPMI with a protocol called RMCP, which runs over IP and can be delivered remotely. The cards implement IPMI/RMCP with an RTOS running on embedded RISC CPUs. Like every card with an embedded RTOS-running processor, they're coded in C.

There are two things that fall out of this.

First, when you write a protocol stack in C, you create memory corruption flaws. It's hard to name any piece of C code that has avoided this problem. Microsoft spends tens of thousands of dollars per dot release to have some of the best testers in the industry fuzz bugs out, and they still slip up. Dan Bernstein managed to let an LP64 overflow slip into qmail.
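
To illustrate the bug class -- this is a hypothetical firmware-style parser, not the code from any actual advisory -- a length field comes off the wire and is trusted before the copy:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical packet handler of the sort that ends up in RTOS
       firmware. The length byte is attacker-controlled and is never
       checked against the destination buffer. */
    void handle_packet(const uint8_t *pkt, size_t pkt_len)
    {
        uint8_t payload[32];

        if (pkt_len < 1)
            return;

        uint8_t claimed_len = pkt[0];
        /* Overflows payload[] whenever claimed_len > 32; the fix is a
           bounds check: claimed_len <= sizeof payload and
           claimed_len <= pkt_len - 1. */
        memcpy(payload, pkt + 1, claimed_len);
    }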

So, thing one: the trend towards more advanced network cards (and storage processors and motherboards and offload boards, etc.) moves us to a place where our underlying hardware is vulnerable to software flaws, even if our operating systems and application code are extensively assured.

Second, the x86 security model going forward is based on the idea that the chipset can assure that known-good code is running, and that the known-good code can use new chipset features to sandbox code, either in VMs or with runtime protection features. This model was designed mostly to defend against attacks originating from application and OS flaws.

But that model fails badly when the assumptions it makes about attack vectors fail. So, for instance, if you can take over the RTOS running on a network card, then however much Intel and AMD plan to eventually deal with IO-level attacks, the systems deployed today get trounced by the DMA controller. Right now, if you can program the DMA controller, you win.
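
To spell out "program the DMA controller": a bus-mastering NIC picks an arbitrary host physical address, and without VT-d the chipset obeys. A sketch with invented register names and offsets (every card's DMA engine is different; this is purely illustrative):

    #include <stdint.h>

    /* Invented register map -- every NIC's DMA engine differs, so these
       names and offsets are purely illustrative. The shape is right for
       a bus-mastering card with no IOMMU (VT-d) in the path. */
    #define DMA_HOST_ADDR  (*(volatile uint64_t *)0x40000010)
    #define DMA_LOCAL_ADDR (*(volatile uint32_t *)0x40000018)
    #define DMA_LEN        (*(volatile uint32_t *)0x4000001c)
    #define DMA_CTRL       (*(volatile uint32_t *)0x40000020)
    #define DMA_GO         0x1u

    void dma_write_host(uint64_t host_phys, uint32_t card_buf, uint32_t len)
    {
        DMA_HOST_ADDR  = host_phys;  /* arbitrary host physical address */
        DMA_LOCAL_ADDR = card_buf;   /* source buffer in card SRAM */
        DMA_LEN        = len;
        DMA_CTRL       = DMA_GO;     /* overwrite kernel code/data: game over */
    }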

That's (I think) what Rutkowska has been saying for the past several years. She's right, of course. But the reality is that anything we do to address this problem architecturally is going to be unsound for years to come. So the immediate thing we need to do is test our complex hardware to raise the cost for attackers of discovering and exploiting these kinds of vulnerabilities.


First, when you write a protocol stack in C, you create memory corruption flaws. It's hard to name any piece of C code that has avoided this problem. [...] Dan Bernstein managed to let an LP64 overflow slip into qmail.

To be fair to Daniel, the bug you're talking about only exists in a configuration -- unlimited RAM per process -- which he explicitly recommends against. Saying "nobody will ever want to send emails larger than 4 GB" isn't entirely unreasonable.
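
For anyone who hasn't seen the LP64 bug class: on LP64 platforms, long and pointers widen to 64 bits but int stays at 32, so a 32-bit byte counter wraps once more than 4 GB flows through it. A simplified sketch of the pattern (not qmail's actual code):

    #include <stdlib.h>

    /* Simplified LP64 illustration, not qmail's actual code. The counter
       should be size_t; as a 32-bit unsigned int it wraps past 4 GB, so
       the buffer ends up far smaller than the data later copied into it. */
    static unsigned int total = 0;   /* bug: should be size_t */

    char *grow_buffer(char *buf, size_t chunk)
    {
        total += (unsigned int)chunk;  /* silently truncates and wraps */
        return realloc(buf, total);    /* undersized allocation */
    }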

Leaving that aside, I name the Tarsnap client code as having a protocol stack which avoids memory corruption flaws. Can you prove me wrong?


It took years and years to find the LP64 slipup in qmail. No, I'm not going to bother trying to prove you wrong.


No, it took a few minutes to find the LP64 slipup in qmail. It took years and years before someone (a) looked for it, and (b) looked in the right place.

Sloccount tells me that the entire Tarsnap protocol stack, from event-driven networking up to request/response construction/parsing, is 2732 lines of code. I'm sure that wouldn't take very long to inspect.


What about using a language immune to buffer overflow for the embedded controller? I know such languages can be made. (For example, one could implement a Smalltalk using formal methods, with an eye towards eliminating buffer overflows.) I'm not sure there currently exists a language suitable for implementing such embedded controllers.

(The smallest Smalltalk image I know of was 45k. Squeak can be stripped down to ~350k, which is about half the size of Perl's runtime footprint in the 1990s.)


The people implementing these images don't even care enough to have their code reviewed (the vulnerability here appears to have been trivially fuzzable). They aren't switching to Smalltalk to deal with a problem they haven't even considered.


I'm not advocating that. I'm just pointing out that secure languages for embedded programming must be possible.


I'm not a "security expert", but it seems to me that problem #1 with your approach is that C just totally dominates the mindshare in the embedded space, and until you solve that problem and its underlying causes, you have lost before you start. The average developer simply doesn't think security is a problem unless you exploit it right in front of them; a goodly number of those people will still deny it's a problem, and a good fraction of those who do agree it's a problem will declare that only C has the performance they need, with no interest in either a debate about the matter or the presentation of a language just as performant as C but also more secure.

Personally, I value security, and I think the evidence at this point is that you have to be a freaking genius to almost correctly code C programs and that's just not scalable, and thus the first thing I'd do is cross C off my list and just live with the results... but then, I'm not an embedded programmer, either. It was time for C to be taken out back and shot in 1980, before the world was networked together, but here it still is, providing us exploits by the cartload. Hooray.

("What should have replaced it, then?" Pick your choice of safe C dialect, which is a productive Google term. I don't deny C's performance or nature as "portable assembler", what I deny is that either of those two things necessarily requires security vulnerabilities to come in by said cartload.)


I'm not a "security expert", but it seems to me that problem #1 with your approach is that C just totally dominates the mindshare in the embedded space, and until you solve that problem and its underlying causes, you have lost before you start.

"My approach" is not to do all embedded development in Smalltalk. I'm only applying facts from the language I know the best (not only as a user but from an implementation standpoint) to discuss what must be possible.


I was responding to your broader first sentence. You don't even necessarily have to freak out an embedded programmer to get closer to secure; there are a variety of "safe" C dialects. (I don't know enough about them to even begin to judge them, but I don't deny that it ought to be possible.)

But I do think that once you shook The Establishment free of C, they would not necessarily move to C+-; there would be a window to wedge some other things in there.


What about using a language immune to buffer overflow for the embedded controller?

I don't do anything with embedded controllers, but on FreeBSD I'd say that buffer overflows are responsible for at most 10% of the security issues we run into.

Eliminating buffer overflows is good, but make sure that you don't miss the pine forest because you're spending all your time looking for oak trees.


You realize you're commenting on a thread about a buffer overflow in a network card firmware image, right?


Of course. My point was just that there's more to worry about than buffer overflows, and switching to a language which helps you avoid buffer overflows is a rather drastic step to take to address just one type of vulnerability.


Yes, but in network card firmware, I suspect that more than 10% of the vulnerabilities are buffer overflows.


Here's what's going on at a high level.

1. Somebody discovered a security vulnerability (i.e., a bug) in the implementation of a network protocol. The bug can apparently be exploited remotely, which means that somebody far away can break into your machine if your software has this bug.

2. This was used by someone else to push forward their agenda. In particular, their point is that newer dynamic root of trust technologies are better than older static root of trust technologies. The goal of these technologies is to make a piece of code execute securely (i.e., without being compromised or modified by an attacker). Static root of trust can make a piece of code execute securely only by trusting the entire software stack from boot time until the execution of that piece of code (there's a sketch of this measurement chain after the list). Dynamic root of trust bypasses this entire software stack -- it lets you verify that the piece of code hasn't been modified just before it is executed.

3. Now, the bug from step #1 can (in theory) be used to "break" into the NIC's persistent storage and compromise the card "forever" by replacing its firmware with a "bad" firmware. This won't be caught by static root of trust technologies, because they do not typically check a NIC's firmware at boot time. And thus, in theory, dynamic root of trust is "better" because it doesn't rely on the entire software stack remaining uncompromised.
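
Here's the measurement chain from point 2 as a sketch. sha1() is a hypothetical stand-in for a real hash implementation, but the extend rule is how TPM 1.2 PCRs actually work:

    #include <string.h>

    /* Static-root-of-trust measurement chain (sketch). sha1() is a
       hypothetical helper, not a real library call; the extend rule
       PCR = SHA1(PCR || measurement) is per TPM 1.2. */
    #define SHA1_LEN 20

    extern void sha1(const unsigned char *data, size_t len,
                     unsigned char out[SHA1_LEN]);

    static unsigned char pcr[SHA1_LEN]; /* all-zero at platform reset */

    void pcr_extend(const unsigned char *next_stage, size_t len)
    {
        unsigned char measurement[SHA1_LEN];
        unsigned char buf[2 * SHA1_LEN];

        sha1(next_stage, len, measurement);  /* measure the next stage */
        memcpy(buf, pcr, SHA1_LEN);
        memcpy(buf + SHA1_LEN, measurement, SHA1_LEN);
        sha1(buf, sizeof buf, pcr);          /* PCR = SHA1(old || m) */
    }

Each stage measures the next before handing off (BIOS -> bootloader -> kernel), so a PCR value vouches for the whole chain. Nothing in that chain ever measures the NIC's firmware, which is exactly the gap point 3 describes.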

Now, my opinions:

a. Remote vulnerabilities are very problematic because they lead to remote exploits. The lesson here: get very experienced/senior/skeptical designers to implement networking protocols. This is where it's worth hiring the smart guy. Implementing a new protocol in C from scratch is crazy; I'd fire the guy who did this if he worked for me.

b. Dynamic root of trust is better than static root of trust on paper. In practice, dynamic root of trust is very hard to implement in a way that stays secure while doing something useful. When executing in "secure" mode with dynamic root of trust, you cannot use interrupts, which basically means it's almost impossible to do anything useful (like sending or receiving a network packet).


Is there a "tl;dr" equivalent for an article that's completely buried in initialism soup?


tl;dr - Nothing is secure. Even Intel's specially designed Trusted Execution Technology (close to the Trusted Platform idea) has known flaws. You can be hacked at levels you cannot control (firmware). It's tricky (not a script-kiddie-level exploit), but possible, and many existing holes are not published/known because researchers would rather do something interesting than uncover yet another bug using the same technique. If you have government-level influence, start complaining to Intel (et al.).


You can remotely exploit bugs in network card firmware, and use that exploit both to DMA read/write the host machine's memory AND to permanently overwrite the card's firmware in EEPROM with firmware you control.

The article also explains some potential defenses against this sort of exploit but notes that none of them are currently viable.



