Chip Red Pill: Arbitrary [Micro]Code Execution Inside Intel Atom CPUs (offensivecon.org)
189 points by blopeur on Feb 7, 2022 | 86 comments



One of the authors of the posted article, Dmitry Sklyarov, of "United States v. ElcomSoft and Dmitry Sklyarov" fame [0], was arrested by the FBI in 2001 for giving a talk at DEF CON exposing the weaknesses in Adobe's e-book copy protection. After spending time in federal detention, Dmitry was released and returned to Russia. The charges against him were dropped.

[0] https://en.wikipedia.org/wiki/United_States_v._Elcom_Ltd.


Your link doesn't actually say the same thing as your post. The charges weren't dropped. He was tried and found not guilty.


> The U.S. government agreed to drop all charges filed against Sklyarov, provided that he testify at the trial of his company.

I think the confusion is that there were charges filed against both him and the company he worked for, the former of which were dropped when he testified at the company's trial (and the company was ultimately found not guilty).



Summary: An Intel CPU in "Red Unlock" mode allows any user-mode code to read and write its microcode. The paper teaches security researchers how to do it. They can use it to discover undocumented Intel CPU internals and functionality.

I don't know if these instructions will be useful in exploits. They require the CPU to be in Red Unlock mode. One known way to enable Red Unlock requires connecting a special cable to the motherboard's USB port and exploiting the Intel Trusted Execution Engine core [0].

There are probably remote exploits via ethernet and the Intel Management Engine.

Could there be some motherboards that shipped with Red Unlock mode permanently enabled? User-mode code running on such machines could trivially root the machine and even escape a hypervisor.

[0] https://github.com/ptresearch/IntelTXE-PoC


thanks for linking to the paper, which contains considerably more information


> All the modern Intel CPUs have a RISC core inside..

This reminds me of (yet another) mind-blowing Chris Domas video: https://invidious.snopyta.org/watch?v=jmTwlEh8L7g


That's a very different kettle of fish - the "RISC" core inside an Intel processor isn't really RISC and isn't really a core (it is the processor), whereas those cheapo VIA ones actually have an x86 translation layer on top of a RISC machine


Is that link supposed to be https://www.youtube.com/watch?v=jmTwlEh8L7g? The one you used didn't work for me.


reload or reopen in a new tab


neither of those worked.


... which turned out to be a fraud by scientific standards, because the flag he "discovered" was documented in the datasheet.


A) He said in the talk that he didn't have the manual for that specific processor.

B) The whole talk was about a tool that automated the process of discovering hidden opcodes in a chip.


> He said in the talk that he didn't have the manual for that specific processor.

That doesn't excuse the lack of attention to detail and doing research before spinning it as something new. Especially since this was in the publicly-available datasheet that one could find with very little searching.


What about the part where he developed a tool that automated the process of discovering hidden opcodes?

If _I_ were going to make such a tool, I would use the data sheet to confirm the tool did what it was supposed to. Given that he lacked the data sheet at the time he developed the tool, it still seems pretty impressive that the tool worked.


Then he should have presented it as a tool to discover hidden opcodes. Everything else is dishonesty.


Documented in the datasheet but not turned off.

The theatre, sure, was just that, but the process was not fraud.


It seems that website has a "blackhole links with a HN referrer" rule. copying into a new tab works.


That's likely to avoid the Hug of Death; you can use another Invidious instance (or YouTube directly, even) instead.


Copy-pasting into the same tab works as well - just make sure to use that and not refresh


AKA xoreaxeax. That guy is a one-man, nation-state-level exploit developer.


Battelle Institute, his employer, is one of the CIA's fronts for black projects.


It's getting increasingly difficult for me to trust American-designed chips and systems at this point... and I can't help feeling that people paid way, way too little attention to everything we learned from the NSA and CIA leaks a few years back.


If you don't trust American chips, then what chips do you trust?

We certainly don't have much choice when it comes to choosing chips. It's an incredibly expensive process and only a select few superpowers can successfully maintain semiconductor industries.


I trust anything made by European companies (NXP, Philips, ST, Siemens, etc.), anything from Japan or South Korea, and most of the Taiwanese and Chinese companies.

Unfortunately, with some rare exceptions, they're not allowed to make x86-compatible chips, because the U.S. has worked long and hard to keep the ISA, and everything used so far to implement it, from being standardised, and thus under an unbelievable weight of patents.

Hopefully the build-up of more European fabs, and the realisation that the EU has to make its own chips, will eventually remedy some of this.


What a bizarre pile of mystical euroism. NXP/Philips sued a university to quash research about the garbage security of their contactless smartcard implementations. I'd rank NXP/Philips way down at the bottom of the stack with state-owned Chinese semi firms.


The modern NXP has a rotten company culture. Which is unfortunate, since Freescale, which they bought, had a much better culture.


My experience with ex-Freescale employees has been a consistent tire fire. If NXP is worse…


The semi industry is really cutthroat. Outside of the US giants, Intel, Nvidia, Qualcomm, and recently AMD, who cream the highest margins because their products are basically irreplaceable, the rest of the semi companies (including most EU ones) are just competing on cost (barring the current shortage, where they could bump up their prices too).


Your wording here clearly shows you have something against Europe and possibly Europeans. I won't be reading any of your comments from here on.


And Chinese?

They seem... certainly not more trustworthy than US companies.


I disagree; an important difference is this: the U.S. has been proven definitively guilty of all the accusations of espionage, sabotage, backdoors, etc., while no clear proof has ever been presented for the Chinese counterparts -- only accusations, overwhelmingly from the very country that has committed all those wrongs itself.

Take whatever side you want, but at least keep to the truth.


x86 isn't patented past AVX2 or so. The older patents have expired by now.


Welcome to the rabbit hole!

https://www.washingtonpost.com/graphics/2020/world/national-...

The whole thing is worth reading, but be sure to read the section “The Irreplaceable Man” if you work anywhere near computer security. Once you understand the tactics used by crypto front companies to keep their employees in the dark, it should be pretty easy to spot such companies from the inside.


Good read, and it hints at just how absolutely nuts the situation must be today, with almost everyone in the world having cellphones, tablets, laptops, and whatnot running predominantly American-designed software and hardware.

Nothing is private or secure.


See also, the story of allegations of attempted tampering with OpenVPN: https://lwn.net/Articles/420858/


* OpenBSD


A lot of their claims sound very scary but I don't really know what to make of this as an average user. Is there an explanation to what this stuff means for me? What security vulnerabilities does this allow for on Intel CPUs? Are they worth worrying about? On a scale of "minor bug" to "every single intel chip is easily able to run arbitrary code and is vulnerable on a hardware level" how bad is this exactly?


The instructions they found are only usable in a special unlocked debug mode that they managed to access by exploiting the Intel CSME at a very early boot stage. Part of the reason this is about Atom is that they haven't unlocked it yet on other, more recent desktop processors.

So no immediate reason to worry, but all the more reason to ask Intel to get all its fucking "management engine" crap out of processors. There is no reason for this mode to be in production processors at all, or at least not other than behind a hard-blown fuse.


Chipmakers are doubling down on coprocessors with Pluton, first to market in AMD's Ryzen 6000-series chips. It's a rootkit on your machine, plain and simple.


Depending on your security needs, you can go back to transistors à la Ben Eater. Or an Arduino, or a permanently-offline device with its Wi-Fi card unplugged and its Ethernet cable desoldered or potted in hot glue/plastic.

Also desolder/unplug the speakers/microphone while you're at it; there may be some air gap leaks that way. Ideally, only use it on a separate power supply also (say, an older vehicle without networking capabilities, or a solar panel).

Check out the Glacier protocol for inspiration: https://glacierprotocol.org/docs/overview/


Just curious: I've recently picked up reverse engineering as a hobby, and I read that some electronic components, such as ROMs, can be protected by blowing a fuse after writing to them. Is there a physical, affordable way (under 5K USD, let's say) to reconnect the fuse somehow?


Reconnecting the fuse isn't a thing; as the other comment notes, it's just a bunch of atoms somewhere in the chip. But in the case of ROM protection for microcontrollers, the fuse doesn't directly, in silicon, prevent reading the ROM -- after all, the controller still needs to be able to read its code! So in practice the fuse only disables e.g. JTAG access, and often even that isn't done in silicon but through a manufacturer boot ROM that checks the fuse state and then either disables or enables JTAG.

So often the easiest way to still read the ROM is to find some vulnerability in the code the chip is running and use it to run code that leaks the ROM contents. Beyond that, it's possible with voltage glitching and similar fault-injection methods to get the chip to skip the fuse check or to put it in some other indeterminate state that allows access -- there are some well-documented methods for popular microcontrollers:

https://www.aisec.fraunhofer.de/en/FirmwareProtection.html
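To illustrate the point that the lockdown often lives in boot ROM code rather than in silicon, here is a toy Python model (all names and logic are my own simplification, not any vendor's actual boot ROM): a single skipped check, e.g. via a voltage glitch, leaves the debug port open.

```python
# Toy model of a boot ROM gating JTAG on a lock fuse.
# Names and structure are hypothetical, for illustration only.

class Chip:
    def __init__(self, readout_protect_fuse_blown: bool):
        self.fuse_blown = readout_protect_fuse_blown
        self.jtag_enabled = False

    def boot_rom(self, glitched: bool = False) -> None:
        """The fuse doesn't disable JTAG in silicon; the boot ROM checks it.
        Fault injection that skips this check leaves JTAG usable."""
        self.jtag_enabled = True          # debug port starts powered/usable
        if not glitched and self.fuse_blown:
            self.jtag_enabled = False     # boot ROM locks it down

chip = Chip(readout_protect_fuse_blown=True)
chip.boot_rom()
print(chip.jtag_enabled)                  # False: protection active

chip.boot_rom(glitched=True)              # fault injection skips the check
print(chip.jtag_enabled)                  # True: ROM readable via JTAG
```

This is why "reconnecting the fuse" is usually unnecessary for an attacker: bypassing the software check is far cheaper than physically repairing the fuse.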


No, that is usually not possible. Sometimes you can reset write-once bits by shining UV light on the right part of the processor die, or you can try to circumvent the fuse checks with fault injection during boot: https://media.ccc.de/search/?q=fault+injection


The cheapest way to reprogram a blown efuse is to buy a new efuse. In some cases the efuse is a separate chip, but it can be integrated into the CPU or microcontroller, in which case your only option is to find one of those without the efuse blown.

The fuse generally is basically the size of a normal transistor on the IC, good luck getting that repaired otherwise :)


> The fuse generally is basically the size of a normal transistor on the IC, good luck getting that repaired otherwise

https://en.wikipedia.org/wiki/Focused_ion_beam

I'm not sure what resolution they've achieved now, but it's definitely possible to either cut or repair connections using such a machine. It's a very expensive process, but definitely within the capabilities of even the far-East "MCU break" companies.


> The cheapest way to reprogram a blown efuse is to buy a new efuse.

I think this somewhat undersells what a monumental task un-blowing an e-fuse on a chip would be (e.g. resetting something like the USB boot enable bit on the RaspberryPi's SoC). If it's even feasible with today's technology, it would take a top-class semiconductor research facility and significant effort to pull it off. It's $millions compared to taking a new ~$1 chip off the shelf.


Every time i read about this 'Intel Management Engine issue' i'm like 'oh, it's that time of the year again'


> "every single intel chip is easily able to run arbitrary code and is vulnerable on a hardware level"

It's not just Intel though. Both AMD and Apple also have their own "security processor" running its own operating system, without your consent, in your own hardware. Essentially this means they own the hardware, not us. I don't know if you would call it that, but to me it sure looks like the definition of a universal backdoor.


Yes for AMD, no for Apple. There is no secret core on Apple chips that has control over the system. They are much better designed than AMD/Intel CPUs and strictly isolate all secondary cores behind IOMMUs.

By "security processor" you probably mean the SEP (which is one of many secondary cores running Apple firmware). The SEP isn't even running when the OS boots and you can just leave it dead and choose not to use it. Other side cores are running or required to get a functional system, but none of them have unfettered access to system memory or the main CPUs (nor does the SEP even if you choose to use it).

In addition, since Apple splits up coprocessor duties among many cores, that also makes it harder for one to compromise others, or for several to collude to compromise the parts of the system they do have access to. E.g. the coprocessor in charge of the display controller can't go and read your keystrokes or send a capture of the screen to the internet, because it doesn't have access to that hardware.


>There is no secret core on Apple chips that has control over the system.

And this has been documented/proven by whom?


This is based on having spent a year reverse engineering the platform to port Linux to it.

Granted, Apple could've added a hidden secret core with secret firmware we haven't found anywhere, somehow. Of course, so could every other manufacturer. If you are concerned about all potential secret backdoors, you'll have to invest in a chip fab and make your own chips; there is no way to prove that the chip you have in your hands is not uniquely backdoored, no matter what documentation you have or not. Documentation cannot prove that the physical chip matches what was documented.

What I can say is they've done a really good job keeping tight security boundaries in their chips, much better than basically every other manufacturer, by all appearances. What I meant by "secret" in my previous comment was a difficult to observe core with full system access running proprietary code, a la ME or PSP; not literally something that is deliberately hidden so as to be completely undiscoverable. So far, there is no evidence of Apple having added any questionable hardware to these chips.


Or, and here's a crazy idea, maybe they use a dedicated security chip to improve isolation and achieve better security?

Don't know about Apple, but the AMD security firmware has been reverse engineered and no backdoors have been found so far.

https://github.com/PSPReverse/PSPTool


> maybe they use a dedicated security chip to improve isolation and achieve better security?

I call bullshit on this one. Security through obscurity is not a feature. If you want to have a dedicated security chip protecting the users, we need to have all schematics and source code to ensure it's safe and adapt it to specific use-cases.

Ideally, that security processor would be physically separate from the usual hardware and operating on specific data lanes. Just like secure smartphone designs don't give the modem hardware access to all phone memory. That the "security chip" is bundled with the CPU/chipset and answers to undocumented (secret?) x86 instructions means to my understanding that it's impossible (unless you're reverse-engineering the computer at the hardware level) to understand/restrict whatever it's doing.

I mean i find it funny that many of us are running Minix without even knowing it, making it one of the most popular OS on the planet. And i'm glad some security researchers figured out how to disable Intel ME completely via some undocumented instructions. But i would certainly feel safer and more in control if i could know/inspect wtf hardware is doing inside my machine. I would certainly trust a free-hardware CPU without a security chip more than a modern Intel/AMD CPU any day of the week.

Some previous related discussions on undocumented hardware:

- https://news.ycombinator.com/item?id=28977175 OK Lenovo: we need to talk (october 2021)

- https://news.ycombinator.com/item?id=28374523 It's time for Operating Systems to Rediscover hardware Usenix keynote (august 2021)


I think you may be confused about what "security through obscurity" means. The security from these coprocessors is not from obscurity but is part of the design.


I like that we can disable Intel ME entirely via undocumented instructions. The fact that whether it's running at all, and whatever computation it's running, depends on hiding certain x86 instructions and the code/memory of the security module is what i refer to as "security by obscurity".

If everything is taken into the design, great! Just open up the design and let us check. Also, if we can't change/upgrade the firmware without cooperation from Intel and motherboard manufacturers, how are we supposed to operate such hardware after Intel and others declare it EOL? Or after those corporations die?

I'm not exactly a fan of hardware tokens like Yubikeys but at the very least the design is clear/open. You know the protocol and can probe things around. Those things i don't call security by obscurity (unless your threat model involves someone doing electronics RE on your key). And if a flaw is found with the design, "just" get a new key no need to throw away your entire motherboard. (I still find environmental waste to be a major problem with hardware tokens but that's a discussion for another day)


>many of us are running Minix without even knowing it

This is news to me; any more info?



The Intel source code was basically leaked for a bunch of their firmware and nothing particularly interesting got out.


Sure, but those of us paranoid about ME can't help but wonder if that was savvy PR.


Any links? Interested to read more about it.


It did the rounds on here when it happened


Security from whom?


Good to see people starting to ask the right questions (see Snowden, "Permanent Record"; Farnell, "Digital Vegan"; Anderson, "Security Engineering"): Security for whom? Security from whom? Security to what end? There is no such thing as "bare security", no tide that raises all ships. "Security" is now a constant-sum game. Your security is my insecurity.


Yes, but then you can probe the pins and MITM, like you can with an external TPM chip. (See something like the TPM Genie) It's internal because it's more secure that way.


These are internal soc components, for this very reason.

Not sure if we are agreeing or not...


I read 'dedicated security chip' as something akin to an external chip like a TPM chip as opposed to dedicated transistors in the CPU die (which is more secure). So it sounds like we're agreeing.


> How bad is this exactly?

7.1 out of 10

https://www.intel.com/content/www/us/en/security-center/advi...

This was part of IPU 2021.2 in November, and bugs in other CPUs were fixed as well, one rated 8.2 affecting many, but not all, Intel CPUs (INTEL-SA-00562). If you haven't updated your UEFI/BIOS since last November and you love to run untrusted code from untrusted third parties on your CPU, the larger story about microcode bugs and privilege escalation into the management engine is bad, and best addressed by keeping up to date with microcode updates.

Please note that the IPU 2021.2 fixes are not runtime loadable: https://github.com/intel/Intel-Linux-Processor-Microcode-Dat... and require a UEFI/BIOS update.

The concrete story is mostly about a problem with the Atom line, which allowed code execution in the IME and a full dump of it. Very interesting for nerds who want to hack into it to take ownership of what they bought; less interesting for malware, but not irrelevant. The link is about the Chip Red Pill team giving a talk on how they broke the Atom and what Intel hides inside.

It is a good talk by people who have years of experience cracking open Intel's chips. I can recommend it to anyone who cares about this.

From a news perspective, however, it is three months old -- if we ignore the ZeroNights Russia talk in September ;-)


> every single intel chip is easily able to run arbitrary code and is vulnerable on a hardware level

Isn't this what we want?

What's the alternative? Your CPU can't run arbitrary code? Having hardware access doesn't give you complete control of the hardware?


I mean, you're being nitpicky with my words here. I'm obviously talking about malicious actors when I use the word "vulnerable".


When we are living in an era of "war on general-purpose computing", where lots of people in the industry work very hard on taking away our freedoms, it is very important to be precise.


That's a really asshole cookie banner. It looks like only essential cookies are checked, but the big red button accepts all cookies, ignoring the selection. You need to carefully read and select the second button to accept only the selected cookies.


I vehemently disagree. This is an exemplary cookie banner.

There are two clearly labeled buttons with good contrast, one for "accept all" and one for "accept selected" (with the latter even being a bit bigger and more visible), plus a third (less visible) "decline" button for those who don't want to check whether the default is checking only essential cookies.

Moreover, if you actually take a closer look, none of the cookies the banner asks for are actually the usual user-hostile ad/tracking cookies. The only thing you can even turn on/accept is third party embeds.

Clicking "accept all" on this dialog puts you into a more privacy-friendly position than clicking "deny all" on many websites (because they then still use 20+ trackers claiming legitimate interest), and IMO this site is one of the few that isn't trying to trick you into agreeing, and it has two single-click opt-out buttons.


I appreciate your insight about the cookies themselves, but I still think the banner is deceitful. If I see "Required" checked and "Features" unchecked I would expect that the primary button respects that decision. The fact that the primary button ignores the content of the form is very surprising to me.

It is easy to think of better UX patterns:

- Check both by default and have only one submit button. This is intuitive but IIUC disallowed by GDPR because it makes opt-out harder than opt-in.

- Skip the checkboxes and simply provide "Required cookies only" or "All cookies". This way there is only one place to make the choice and they aren't ignoring the checkboxes.

- Just remove the "Accept all" button and make the primary form button "Accept selected".
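The difference between the patterns above comes down to which button the form's primary action maps to. A minimal Python sketch (category and action names are hypothetical) of consent handling where the submit button actually respects the form state:

```python
# Sketch: mapping consent-banner buttons to accepted cookie categories.
# Category and action names are hypothetical.

REQUIRED = {"consent-cookie", "session"}   # needed for the site to function

def resolve_consent(action: str, checked: set[str]) -> set[str]:
    """Return the set of accepted categories for a given button press."""
    if action == "decline":
        return set(REQUIRED)               # only what the site needs to run
    if action == "accept_selected":
        return REQUIRED | checked          # exactly what the user ticked
    if action == "accept_all":
        return REQUIRED | {"youtube", "maps"}
    raise ValueError(f"unknown action: {action}")

# The surprising behavior complained about upthread: "accept_all" as the
# primary button ignores `checked`. Making "accept_selected" the primary
# button means the result always matches what the checkboxes show.
print(resolve_consent("accept_selected", {"maps"}))
```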


To me, the banner was clear. But I agree the implementation could be improved.

- There are 2 "required" features. You don't need to get permission to place cookies that are needed to make your site function. Placing a cookie to track your consent is perfectly fine, no need to make that optional, or even mention it in the cookie banner. Same goes for the session cookie: if you need it just set it. You could question if you actually need it in this case, but as long as it's a true session cookie and not persisted I would consider it not personally identifiable.

- There are 2 optional features: Youtube Videos and Google Maps. Why do I have to fold open "Features" to find out what the features are? Just show me the list already. Hiding the list is a dark pattern employed by advertisers to get you to agree. In this case the features are actually valuable: embedded videos and embedded maps.

- Those 2 optional features are not even used on the linked page! Then why does it show me a consent banner?

For some reason, people hear "GDPR compliance" and just slap on an annoying consent modal popup.

A much better solution is to just put in an embed placeholder with the title of the linked content, and warn the user that a 3rd party wants their personal data. Provide a link to a detailed privacy policy, and a link to enable the embed. At that point, record the consent and enable the embed.
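The click-to-load flow described above can be sketched roughly like this (function names, markup, and the consent store are all hypothetical; a real implementation would persist consent server-side or in a cookie):

```python
# Sketch of a click-to-load embed: render a placeholder until the user
# consents, then record consent and render the third-party iframe.

def render_embed(video_id: str, consented: bool) -> str:
    if not consented:
        # Placeholder: no request to the third party is made yet.
        return (f'<div class="embed-placeholder" data-video="{video_id}">'
                'Embedded YouTube video. Loading it sends your IP to Google. '
                '<a href="/privacy">Privacy policy</a> '
                '<button name="load-embed">Load video</button></div>')
    return (f'<iframe src="https://www.youtube-nocookie.com/embed/{video_id}">'
            '</iframe>')

consent = {"youtube": False}               # toy in-memory consent store

def on_load_clicked(video_id: str) -> str:
    consent["youtube"] = True              # record consent at click time
    return render_embed(video_id, consent["youtube"])

print(render_embed("abc123", consent["youtube"]))   # placeholder first
print(on_load_clicked("abc123"))                    # iframe after consent
```

The key property is that the third-party request only happens after the explicit click, so no banner is needed for pages that never load an embed.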


The checkboxes allow detailed control for those who want it and provide some easy to understand info for what consent is required. Skipping them in favor of some generic "yes to all" is a dark pattern


This is literally the best cookie banner I've ever come across, unironically.

Maybe you're less used to seeing them because you're not from the EU, but most cookie banners / popups have at least one or more actual obnoxious user-hostile patterns going on:

- An actually hard-to-find decline / accept-selected button, sometimes obfuscated behind a small, badly colored "More information" link.

- A list of 60-120 "partners" (no joke) with no way to see what functionality they actually provide.

- No decline button at all.

- Pressing decline (or "agree to selected") makes the modal popup spin a spinner for 60 seconds while supposedly processing your request. Pressing accept makes it go away instantly.

- 4-5 categories (Required/Essential, Functional, Marketing, Statistics) with no clear explanation of what will be impacted when you disable a category.

- No way to accept cookies from Vimeo embeds but not accept cookies from Google (YouTube) embeds.

I could go on and on.

This site, on the other hand, doesn't even have analytics or any of the scummy stuff the cookie consent law was designed to thwart; they clearly explain each category and your options, all the buttons are the same size but clearly differentiated, and it was actually the first site I've ever(!) accepted all cookies from.


There is a Decline button that closes the dialog immediately which is always appreciated.


Unfortunately, this is a dark pattern used by almost every website that has a banner. The “default” button is a “ignore my selection and accept all” as opposed to just being “save my current selection”.


Every day I am ever thankful for https://www.i-dont-care-about-cookies.eu/


Accepting all cookies is just as easy, and that's what the extension probably does (they say "sometimes", but we all know what that means). Perhaps www.i-dont-care-about-privacy.hr would be a better domain name.


Not if you don't save cookies after tab close, and if you have third party cookies disabled.


The EasyList Cookie filter list (shipped with uBlock Origin, but disabled by default) covers this too.


I was able to remove two or three other browser extensions when I started digging into uBlock Origin's capabilities. I also use it as a replacement for NoScript; it's easier to sync my allowlist between multiple computers with uBO than it is with NoScript.


Just use the 'Decline' button to the left.


Asshole, or genius?!

I love anti-patterns. My favorite is Android, in response to a government somewhere, every half year or so being required to ask my permission to destroy what shred of privacy I am still afforded. "What's the most honest way to meet the legal requirement and ask the user in good faith?" I hear you ask. Good question, and the solution is simple: randomly interrupt the user's existing workflow with a popup. Comprehending this non sequitur is near impossible. Rejecting the proposal is met with a caution that certain features may not function as expected and guarantees future interruption by the exact same question. And all this if you didn't accidentally click OK straight away because the popup jumped in front of whatever you were already trying to click in your intended app. 0.2 seconds of transient grey screen beyond which you regret that you have just given them permission to something you would never normally accept if it were explained in a single simple sentence, and undoing it is impossible. Sleazy.



