"Finally, we are going to require that Qubes-certified hardware does not have any built-in USB-connected microphones (e.g. as part of a USB-connected built-in camera) that cannot be easily physically disabled by the user, e.g. via a convenient mechanical switch. However, it should be noted that the majority of laptops on the market that we have seen satisfy this condition out of the box, because their built-in microphones are typically connected to the internal audio device, which itself is a PCIe type of device. This is important, because such PCIe audio devices are – by default – assigned to Qubes’ (trusted) dom0 and exposed through our carefully designed protocol only to select AppVMs when the user explicitly chooses to do so."
This made me download Qubes. Amazing project that seems to care.
CPU firmware is likely the worst type of compromise (see Intel ME). Still, the issue with microphones and cameras is that private information can be gained by listening in on conversations near a laptop or by recording what the camera sees.
Keylogging isn't good either, but if you're using a password manager and/or 2FA then it's not really as big of an issue. It is an issue for your disk encryption passphrase, but I'm hoping that in the future we might be able to remedy that through some 2FA-like system[1]. If we seal disk encryption keys inside TPMs then we only have to come up with a sane security policy (which is obviously the hard part).
Disk controllers are similarly not an issue if you have full-disk encryption (though then your RAM is the weak point, because it contains the keys). There was some work in the past on encrypted RAM, but I doubt that is going to be a reality soon. The real concern is that a worrying array of devices plugged into your laptop can DMA your memory (USB 3.1, PCI, etc.). The IOMMU improves this somewhat, but from memory there is still some kernel work necessary to make the device initialisation order secure (if a DMA-capable device is brought up before the IOMMU is, you don't get IOMMU defences for it).
> CPU firmware is likely the worst type of compromise (see Intel ME).
I see it, and I see the AMD and ARM equivalents, and I'm sitting here wondering how the hell do I buy a decent laptop without that crippling trust hole. AFAICT, one cannot.
I'm willing to pay more for processors that aren't thus afflicted. Is anyone at AMD, Intel et al listening?
I believe so too. OpenPOWER and RISC-V show great promise, but I am not aware of any significant tape-outs for either (not to mention that you also need consumer motherboards et al. that are compatible with the chipset).
The nice thing about OpenPOWER is that many distributions (openSUSE is one that I know for sure) already provide some support for ppc64le, so the transition shouldn't be too painful from a port-the-distro perspective. RISC-V will have similar support once it's merged into the mainline kernel and once distributions are confident enough to spin up some QEMU build images for RISC-V.
> I'm willing to pay more for processors that aren't thus afflicted. Is anyone at AMD, Intel et al listening?
I am inclined to believe that the reason is economic rather than them just being evil (that doesn't mean that it's not a horrible misfeature that mistreats users, I just don't think that the inclusion of ME on consumer hardware was an intentional decision). Intel ME is "required" for enterprises because sysadmins want to be able to control all of the machines they provide their employees (you can have varied opinions on whether that's ethically acceptable, but that's the reason).
Given that consumer hardware generally comes from the enterprise world after it has dropped in value, I would not be surprised if Intel ME was left in consumer CPUs simply because it was cheaper than removing it. There's also the (weaker) argument that an enterprise should be able to use Intel ME on a BYO-device system, but that strikes me as unethical.
You might be willing to pay extra for Intel ME-less CPUs, but have you seen what the bill is for a full tape-out? There needs to be significant market demand for something like that.
> but even a sysadmin at a fortune 500 company is in the dark about all that this second cpu can and can't do.
The sysadmin might not know how it works, but they do know they can control machines remotely using their Intel branded management system (or other rebranded variety). Just because they don't know how bad it is doesn't mean that's not the motivation for it.
IPMI is a similar deal. Modern servers have a secondary computer embedded in the motherboard (which have been historically _very_ insecure) because it's useful for managing servers. Intel AMT is the work-laptop version of that technology, and you can bet that most enterprises use it.
> if it was economical they would offer you to pay more for full control for it.
But they do. The entire reason why enterprise deployments of large numbers of work laptops/desktops are so expensive is that you have to pay extra for the management system that comes with them. Just because they don't remove the "backdoor" in their consumer lines doesn't mean they won't charge you through the nose to be able to administer the damn thing.
I am very anti-ME and wish that all firmware were free software, but arguing that ME's presence in consumer CPUs is not economically motivated doesn't sound right to me. The technology was developed because its developers were not aware how unethical their actions were, and that's where the core of this problem lies.
I was referring to TOTP. Some 2FA doesn't protect against this, sure. But the whole point of TOTP is that the codes change over time, so a captured TOTP code quickly becomes useless. This is literally the threat model of TOTP. The only way of breaking it is to get the shared key (which a keylogger won't give you; you'd need to attack the TOTP device itself).
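For the curious, TOTP really is just HMAC over a time-derived counter (RFC 6238, building on RFC 4226), which is why a keylogged code expires with the time step. A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test-vector secret; a code logged at t=59 is already invalid
# one time step later, because the counter (and thus the HMAC) changed.
secret = b"12345678901234567890"
print(totp(secret, 59))  # 287082 per the RFC 6238 test vectors
```

The attacker's only persistent target is `secret`, which never crosses the keyboard, which is the point being made above.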
Most services I use have 30-second TOTP codes, but if you're facing an attacker that can perform an on-demand replay attack in the time it takes GMail to load, then you have much bigger problems (like hijacked browser sessions). Also, my response was in relation to the claim that there was "no security improvement", which is simply not true.
I'm not sure we're discussing the same threat model here. If you're worried about long-term compromise then that race window is a much smaller concern than the fact that having a TOTP code makes it so that an attacker can't just keylog you and get the password at a later time.
Agreeing on threat models is the first step in any discussion about security. Does your threat model include being so badly owned that a keylogger on your machine can exfiltrate data so quickly that someone can replay your login session? Is that a reasonable threat model? Is it helpful to require that to be solved or otherwise not be considered good enough?
no worries :) I'm kind of interested, because lack of trust in microphones/cameras specifically on laptops is a theme I've seen commonly expressed by people in general IT and IT security.
My thinking on the subject was roughly that for an attacker to have the ability to spy on me via that mechanism would strongly imply that they already have privileged access to my computer (to be able to activate the device and exfiltrate the data).
At that point, personally, I'm far more worried about the data they'd get from my keyboard (specifically credentials for various systems) than I am about them being able to see me sit at a desk.
Many people don't have high-powered credentials on their work computers and have safer personal-use devices, e.g. a Windows work laptop from corporate IT vs. iOS personal devices.
Another type of user keeps confidential stuff out of networked computers and the cloud entirely.
Anyway, I'm not going to take the laptop apart and analyze the internal microphone hardware to make sure that the switch actually disables the mic. So even in that case, I'd assume the mic was still on even if the switch was in the off position.
On the other hand, I'd prefer to buy a laptop with a hardware switch for the internal microphone, if one existed, as it's better to have such a switch in case it actually does work as advertised.
So you don't own a laptop, smartphone or tablet? How do you live your life peacefully while there are dozens of devices around you with internal microphones?
You don't do business covered by serious NDAs on non-EMSEC/COMSEC equipment. You do not talk about sensitive information in a wired room.
In your personal life just leave your microphoned laptops/phones in a box in the room next door.
Two birds, one stone: less time spent behind a screen unless you need it, and your tinfoil-hat friends feel safer!
USB 3.x controllers are more complex than predecessors and typically run some firmware on the controller chip to implement functionality which used to be implemented in the OS drivers.
AMD is not going to open-source or disable PSP. That thread was four months ago and they still haven't even commented publicly on it. See this recent update from 8 days ago:
If AMD really wanted to, they would have announced something by now. I can commend the AMD rep who continues to push for it, but he can't dictate company policy.
Yeah, given Intel's support for TXT and SGX which are basically moves in the opposite direction, I highly doubt they want to lose customers over things like not being able to play some new DRM'd content from Netflix. The gain from this is probably minimal compared to the loss from something like that. While I'd definitely buy it, the market potential of it seems limited overall.
Though it's possible they could just offer it as an option on some chips to get both markets, which would do the job. But given the sheer diversity of ARM and possibly RISC-V SoC vendors, those might be a better starting place than x86.
Can someone please explain, from a practical perspective, what does RISC-V give your average user?
Right now, I want a secure laptop. I can't buy one (nothing exists that doesn't require binary blobs), so I decide to make my own.
What do I need?
1. An instruction set.
2. A factory.
3. Customers (and a lot, so power of scale can make it somewhat reasonably priced).
All RISC-V could help with is #1. I won't have to contract from ARM and will save some cash there.
But I'll still need to build a factory and deal with economy of scale.
Moreover, what will prevent companies from leeching off RISC-V and patenting improvements? As I understand it, there are so few foundries right now that they can easily cross-license patents from each other and prevent upstarts from breaking in (so you'll have a situation where the industry leaders end up organizing themselves into something that looks like ARM or Intel/AMD).
That RISC-V stuff looks pretty appealing - I'm looking at the Freedom E310 now, and there's one thing I'm sort of confused by.
Is only the architecture open, but the silicon isn't? If so, what's the point of that, is the only thing differentiating it from ARM that there's no licensing fee?
If so, are there any ARM SoC vendors making them in a way that they're relatively free from stuff like Intel ME?
Sorry for the late response! (was out of the country)
RISC-V is a free and open instruction set architecture (ISA). People can go ahead and build open-source implementations, closed-source implementations, licensed implementations. This is very different from ARM, where you can only buy implementations from ARM - or, if you happen to be one of a handful of selected companies with an ARM architectural license (which costs $$$$$), you can build your own implementation, but it still has to meet certain specifications as dictated by ARM. People can freely implement RISC-V processors, extend them, and play around with them. We think RISC-V has big potential to unleash innovation. As a matter of fact, we believe it is the prerequisite.
SiFive has open-sourced the RTL that went into the FE310. We think this is a big deal, because other SoC vendors don't open-source their RTL.
> Another important requirement we’re introducing today is that Qubes-certified hardware should run only open-source boot firmware (aka “the BIOS”), such as coreboot.
I recently flashed coreboot on my X220 (and it worked surprisingly enough). However, I couldn't find any solid guides on how to set up TianoCore (UEFI) as a payload -- does Qubes require Trusted Boot to be supported on their platforms (I would hope so)? And if so, is there any documentation on how to set up TianoCore as a payload (the documentation is _sparse_ at best, with weird references to VBOOT2 and U-Boot)?
Otherwise I'm not sure how a vendor could fulfill both sets of requirements.
If I read that right, they're allowing Intel ME, which sounds like a sad compromise to me. Given that it's a pretty big complex black box that one can't easily disable, would you agree that x86 is doomed when it comes to security? If that's the case, is there any hope we could have a CPU with competitive capabilities? (For example, is there an i7 alternative for ARM?)
What could one do to make it possible to have ME-less x86 in the future?
There are open processor designs; e.g., many SPARC designs are published. You could run a reasonable webserver or what have you on those. But only the mega-corp chipmakers are going to be competitive with the current state of the art, and the mega-corp chipmakers are going to include ME or an equivalent because their mega-corp clients want it.
More generally if the processor's going to have any dynamic internal logic then that has to run somewhere. Frequency scaling, wake-on-lan, microcode updates... you probably do want an ME-style embedded management processor that runs the processor's firmware just as you would for any other peripheral (hard drives, wifi controllers and so on all contain their own embedded ARM cores these days). ME itself isn't the issue - having what runs there be open and inspectable is.
It should be noted that ME is significantly less powerful than the main CPU cores.
If performance is not a huge concern, one could (in theory, of course) design software so CPU/memory-hard that the ME is simply unable to perform meaningful key-material recovery for FVEY.
I'm curious: if we can solve the "trusting trust" problem - that is, identifying a compromised compiler even when the compiler used to check it may itself be compromised - couldn't we potentially solve this problem in a similar way?
Start with a simple CPU and memories you can hand-check, sent to a 0.35-0.5 micron fab that's visually inspectable. Then, after verifying a random sample of those, you use the others in boards that make the rest of your hardware and software. You can even try to use them in peripherals like your keyboard or networking. Make a whole cluster of crappy CPU boards running verified hardware, each handling part of the job, since it will take a while. You can use untrusted storage if the source and transport CPUs are trusted, since you can use crypto approaches to ensure data wasn't tampered with in untrusted RAM or storage. Designs exist in CompSci for both.
So, you'll eventually be running synthesis and testing with open-source software, verification with ACL2 a la Jared Davis's work (maybe a modified Milawa), visual inspection of final chips, and Beowulf-style clusters to deal with how slow they are. Then use that for each iteration of better tooling. I also considered using image recognition on the inspection photos, trained by all the people reviewing them across the world - more as an aid than a replacement for people. It would be helpful as transistor counts go up, though.
Ultimately a compiler is just a bit of software; one that takes inputs and produces outputs. The identification of compromise is the difference in outputs for the same inputs (simplified, of course).
So, given we can control most inputs to hardware, and most outputs, it seems possible to objectively identify when the HW is misbehaving (such as "A" produces network output that "B" does not). It wouldn't nail down which piece of hardware was compromised, but it would help identify that hardware is compromised.
It will never be _that_ easy, of course... but it seems possible.
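The comparison idea above can be sketched as a toy: you don't need to understand either implementation, only to check that independent implementations agree on the same input. Here the "compilers" are hypothetical pure functions standing in for real toolchains (a real-world version of this is Wheeler's diverse double-compiling, which compares binaries built by unrelated compilers):

```python
import hashlib

def compiler_a(source: str) -> bytes:
    # pretend "compilation": an uppercase transform stands in for codegen
    return source.upper().encode()

def compiler_b(source: str) -> bytes:
    # independent implementation of the same "spec"
    return "".join(c.upper() for c in source).encode()

def compiler_evil(source: str) -> bytes:
    # a compromised build that quietly alters output for certain inputs
    out = source.upper().encode()
    if b"LOGIN" in out:
        out += b"/*backdoor*/"
    return out

def fingerprint(output: bytes) -> str:
    # comparing digests is enough; no need to inspect the output itself
    return hashlib.sha256(output).hexdigest()

src = "int login(void);"
honest_agree = fingerprint(compiler_a(src)) == fingerprint(compiler_b(src))
evil_detected = fingerprint(compiler_a(src)) != fingerprint(compiler_evil(src))
```

As the comment says, a mismatch doesn't tell you which implementation is compromised, only that at least one is - but that alone converts an invisible compromise into a visible one.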
It's a solved problem. Paul Karger, who invented the attack and the concept in the 1970s, immediately worked with others to solve it with rigorous methods called high-assurance security. As far as this problem goes, it's mainly a matter of people you trust reviewing the code, the code getting distributed to you, and you verifying that you got what they reviewed. With most distros, it boils down to that, since you have to trust millions of lines of (possibly privileged) code in the first place. SCM security of a trusted repo becomes the solution. Wheeler covers SCM security here:
Now, let's say you want to know the compiler isn't a threat. That requires you to know that (a) it does its job correctly, (b) optimizations don't screw up programs esp removing safety checks, and (c) it doesn't add any backdoors. You essentially need a compiler whose implementation can be reviewed against stated requirements to ensure it does what it says, nothing more, nothing less. That's called a verified compiler. Here's what it takes assuming multiple, small passes for easier verification:
1. A precise specification of what each pass does. This might involve its inputs, intermediate states, and its outputs. This needs to be good enough to both spot errors in the code and drive testing.
2. An implementation of each pass done in as readable a way possible in the safest, tooling-assisted language one can find.
3. Optionally, an intermediate representation of each pass side-by-side with the high-level one that talks in terms of expressions, basic control flow (i.e. while construct), stacks, heaps, and so on. The high-level decomposed into low-level operations that still aren't quite assembly.
4. The high-level or intermediate forms side by side with assembly language for them. This will be simplified, well-structured assembly designed for readability instead of performance.
5. An assembler, linker, loader, and/or anything else I'm forgetting that the compiler depends on to produce the final executable. Each of these will be done as above with focus on simplicity. May not be feature complete so much as just enough features to build the compiler. Initial ones are done by hand optionally with helper programs that are easy to do by hand.
6. Combine the ASM of compiler manually or by any trusted applications you have so far. The output must run through assembler, linker, etc. to get the initial executable. Test that and use it to compile the high-level compiler. Now, you're set. Rest of development can be done in high-level language w/ compiler extensions or more optimizations.
7. Formal specification and verification of the above for best results. This has already been done with CompCert for C and CakeML for SML. As far as trust goes, CakeML runs on Isabelle/HOL, whose proof checker is smaller than most programs; HOL Light would make it smaller still. This route puts trust mostly in the formal specs plus one small, trusted executable instead of a pile of specs and code. A vast increase in trustworthiness.
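Step 1's "good enough to both spot errors and drive testing" can be made concrete with a toy pass. Everything here is hypothetical and nothing like a real compiler pass, but it shows the shape: a pass, a precise spec ("semantics are preserved"), and a check that exercises the spec:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Lit, Add]

def evaluate(e: Expr) -> int:
    """Reference semantics of the tiny expression language."""
    if isinstance(e, Lit):
        return e.value
    return evaluate(e.left) + evaluate(e.right)

def fold(e: Expr) -> Expr:
    """The pass: collapse Add(Lit, Lit) into a single Lit."""
    if isinstance(e, Lit):
        return e
    l, r = fold(e.left), fold(e.right)
    if isinstance(l, Lit) and isinstance(r, Lit):
        return Lit(l.value + r.value)
    return Add(l, r)

def spec_holds(e: Expr) -> bool:
    """Specification of the pass: folding must preserve evaluation."""
    return evaluate(fold(e)) == evaluate(e)
```

In a verified compiler, `spec_holds` becomes a machine-checked theorem over all inputs rather than a test over some; the point of the decomposition above is that each pass gets a spec this small.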
@rain1 has a site collecting as many worked examples as possible of small, verified, or otherwise bootstrapping-related work on compilers or interpreters. I contributed a bunch on there, too. I warn it looks rough since it's a work in progress that's focused more on content than presentation. Already has many, many weekends worth of reading for people interested in Trusting Trust solutions. Here it is for your enjoyment or any contributions you might have:
If I understand it correctly, ME has basically unrestricted access to RAM, bypassing the CPU and any restrictions the hypervisor and/or operating system may impose.
If I can peek and poke around in your RAM as I please, no amount of cleverness is going to save you if my intentions are malicious.
(Don't worry, though, I have no such intentions, and I don't fiddle with other people's RAM as a matter of principle, unless they ask me to. ;-))
You can prevent certain things through address randomization and by using canaries to try to detect intrusions. I think if you made the system self-modifying and incorporated a true RNG, it would be theoretically possible to obfuscate it at run time against malicious observers, assuming attackers haven't observed the complete obfuscation process.
When you're running megabytes of proprietary code on numerous processors in your laptop, completely out of your control, why focus on Intel ME? What about your network card, which runs a dedicated processor with some kind of operating system, executing firmware and processing every network frame before your OS receives it, for example?
When the network card tampers with the packets, this can be detected if the network protocols use the correct cryptographic algorithms to ensure integrity and confidentiality. Protecting against tampering on the CPU level is much harder, since this is where these algorithms are carried out.
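The integrity check described here is essentially an authenticated tag on every message, verified above the untrusted NIC. A toy sketch with a pre-shared key (all names are mine; real protocols like TLS use AEAD ciphers and proper key exchange rather than a bare HMAC):

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)  # shared secret, assumed established out of band

def seal(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag computed above the untrusted NIC."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def open_sealed(frame: bytes) -> bytes:
    """Verify the tag; any bit flipped in transit makes this fail."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: frame was tampered with")
    return payload
```

The asymmetry the comment points at: the NIC never sees `KEY`, so it can't forge a valid tag - but the CPU computing `seal` and `open_sealed` necessarily holds the key, which is why CPU-level tampering is the harder problem.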
If you think you're going to catch every possible NIC-level modification, does tampering on the CPU really matter if there's no way to store or exfiltrate the data without being detected?
Qualcomm will first minimize its risk going into the PC market by making its mobile chips a bit more optimized for the PC market. But if that goes well and all, it may eventually go "upmarket" with higher-end chips, which will require their own R&D and so on.
However, I don't know if that necessarily means we'll have a more open alternative to Intel. Evidence so far seems to show that Qualcomm may be an even bigger bully than Intel was or is, so I would not look up to Qualcomm to being a savior for the market, but more like the tyrant that replaces the previous tyrant.
OpenPOWER[1] would be a great option. There was a recent crowd-funding effort to try to make an OpenPOWER desktop a reality[2] but unfortunately it didn't get nearly enough funding (though apparently they are still developing it[3]).
That would have been a cool machine, but unfortunately, the price was way outside my budget.
If such a machine with reasonable specs (I do not expect a 64-core 256-GB-RAM-monster) could be brought down to the 1000 $/€ price range, I would seriously look into it.
(I am not sure how realistic that price range is, though.)
What about Loongson? IIRC, Richard Stallman uses a notebook based on it, because it has free firmware. Performance is probably not breathtaking, but it exists. Does somebody know if there are desktop machines built around that chip?
I have to agree. The Talon looked amazing, but the crowdfunding price was way out there.
They should have offered a bare version with just the board, RAM, and maybe a video card (to ensure compatibility). They needed the ~$1k USD hobbyist market.
It seems like the goals were way too high, like the Ubuntu Edge.
You can't buy a processor with comparable performance to a modern Intel without some kind of scary management engine. AMD has them too and Arm doesn't compete on performance.
For certain values of "laptop", probably (or "notebook" might be better, since I don't know if I'd want to put one on my lap), but I don't think the tradeoffs would be generally worth it right now. As a spec-sheet eyeball check: there are ultra-high-performance gaming notebooks that do ridiculous stuff - Acer makes one, IIRC, with SLI 1080s and an i7 that is rated to pull up to 330W from the wall, and of course it weighs something like 8-8.5 kg (18-19 lbs). There seem to be ones that "merely" use a single 100-150W desktop-class GPU and processor that are more in the 5.5 kg/12 lbs range, but that's still no featherweight (a 15" MacBook Pro, for contrast, is about 1.8 kg/4 lbs, and even in the mid-90s the big old PowerBooks maxed out around 3.2 kg/7 lbs IIRC), and obviously we're generally talking just a few hours of battery life at best away from mains.
But at any rate it's not unfeasible or unknown right now to deal with 100-200W worth of TDP in big 17" (or even 21"(!!!)) notebooks, and there does seem to be a functional (albeit niche) market for it. So at that range it'd be feasible in principle to stick in a low end POWER8 and smallish but functional GPU and have a "notebook POWER8 system", but it'd be a compromised machine in terms of what we'd normally find desirable in a mobile system.
POWER9 (which I think is still slated to go online in the Summit & Sierra supercomputers this year?) is supposed to have improved energy efficiency and management features, which though aimed at scaleup/scaleout of course might help out a bit in other settings in theory. But even so it'd be a tougher chip to build in an SFF system around let alone a notebook. Any potential buyers would have to care a very great deal about what it brought to the table.
"The general idea is to remove the SPI flash chip from the motherboard, and route the wiring to one of the external ports, such as either a standard SD or a USB port, or perhaps even to a custom connector. A Trusted Stick (discussed in the next chapter) would be then plugged into this port before the platform boots, and would be delivering all the required firmware requested by the processor, as well as other firmware and, optionally, all the software for the platform."
It seems to me that there's not much hope for an ME-less x86 compatible machine in the near future. Intel is pretty invested in the product and AMD has introduced a similar solution for their systems.
Another architecture all together, designed from the ground up to support a free and secure system, seems a better bet.
Is this something we could achieve with a corporate alliance? I know a lot of tech companies would like to give their employees secure laptops. I also know that there are large costs associated with making hardware, especially if you are talking about dropping ME.
A dozen companies with 1,000 employees each and a budget of $2,500 per employee gets you $30 million, which is surely enough to get a decent, Qubes-secure laptop with no ME. You aren't going to be designing your own chips at that point, but you could grab POWER8 or SPARC or ARM.
Are there companies that would reasonably be willing to throw in a few million to fund a secure laptop? I imagine at least a few. And maybe we could get a Google or someone to put in $10m plus.
Intel ME is effectively the result of a corporate alliance... large organizations want central control of the computers they give their employees regardless of what that employee, the computer's user, wants.
not exactly. they want disk encryption with a master password fallback and bios tampering detection. then there are nsa and nist and darpa et al which want lots more.
Intel just decided to clump it all together. and it doesn't even fully address the two main corporate requests.
With an alliance you can achieve agreement, not quality. Everybody will say, they agree to a standard, then you get a 200 bullet point document, and 5-10 years later you get expensive, certified solutions that in reality only contain 20 of the 200 points, acting like they contain 180 points, and certainly excluding the important ones because these are hard to solve and expensive to implement.
Quality you can only achieve by possessing the right skills and making the right long term investments.
I would think that one company (Google, Amazon, Facebook, etc.) that cared enough would be better off SOLELY funding a project like this for itself first - then others second.
$100 Million investment isn't a stretch for something from a large company.
ChromiumOS is what Google bases ChromeOS on, and its source is available (most notably, the U-Boot and device-specific firmware source is available for all Chromebooks). That's one of the reasons why Chromebooks are so well supported by coreboot.
"For years, coreboot has been struggling against Intel. Intel has been shown to be extremely uncooperative in general. Many coreboot developers, and companies, have tried to get Intel to cooperate; namely, releasing source code for the firmware components. Even Google, which sells millions of chromebooks (coreboot preinstalled) have been unable to persuade them.
...
Basically, all Intel hardware from year 2010 and beyond will never be supported by libreboot...."
Looks like Qubes makes you pay to get certified: https://puri.sm/posts/ "The costs involved, requiring a supplementary technical consulting contract with Qubes/ITL (as per their new Commercial Hardware Goals proposal document), are not financially justifiable for us."
What did you expect? If you hire an expert to inspect your product so you can show a certificate and boost sales, it's reasonable for the expert to get paid.
A potential alternative is to arrange a back-end revenue-share agreement: either ink an agreement along with Purism's CPA firm, and/or have Purism file Form 8821 naming Qubes as a disclosure designee, sharing Purism's tax-return information with Qubes. This would confirm the shipped units in a relatively neutral manner and allow charging a percentage per shipped unit. Not at all ideal, because Qubes has no control over Purism's business and how much they ship, among a host of other reasons. But I'd like to see this security focus broaden, and I'd hate to see perfect cost recovery become the enemy of good-enough cost recovery and sink the hardware certification that helps broaden the security emphasis.
That's a little disappointing. I understand there's financial costs involved in someone certifying the thing, and I understand that consultant should be paid for their work, but for the sake of having a "flagship" laptop that QubesOS can point to--in addition to the fact that the OS would likely gain greater adoption from being a pre-installed option on a laptop--you would think that Qubes might be willing to eat the cost.
If Qubes wants this so bad, they should pay for this themselves, the first couple of laptops. Then if it proves to be a good selling point, others will follow and pay.
> The vendor will also have to be willing to “freeze” the configuration of the laptop for at least one year.
This is one of the most important points. The speed at which laptop vendors are releasing new SKUs is staggering. I know the whole supply chain is to blame, but apart from a few models, the number of different SKUs is way too high.
Simply put, they are trying to hit every gap in the market because their products are what economists like to call fungible. There is simply not enough to distinguish between them.
This is why Apple can sit with just a few models year after year, because they are the sole vendor of OSX/MacOS.
Once more I get the impression that computer security people are off in a different universe, where a computer at the bottom of the ocean is a "reasonable" way to do computing.