pregnenolone's comments

I’ve reached a point where I stop reading whenever I see a post that mentions “one-shot.” It's becoming increasingly obvious that many platforms are riddled with bots or incompetent individuals trying to convince others that AI is some kind of silver bullet.

RAM encryption doesn’t prevent DMA attacks, and performing a DMA attack is quite trivial as long as the machine is running. Secure enclaves do prevent those, and they're a good solution. If implemented correctly, they have no downsides. I'm not referring to TPMs, due to their inherent flaws; I’m talking about SoC crypto engines like those found in Apple’s M series or Intel's latest Panther Lake lineup. They prevent DMA attacks and side-channel vulnerabilities. True, I wouldn’t trust any secure enclave never to be breached – that’s an impossible promise to make, even though breaching one would require a nation-state-level attack – but even this concern can be easily addressed by making the final encryption key depend on both software key derivation and the secret stored within the enclave.
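The last point can be sketched roughly like this. Assumptions: `deriveFinalKey` is a hypothetical name, `enclaveSecret` is shown here as a plain byte array purely for illustration (in reality it would stay inside the SoC engine and the mixing would happen there), and PBKDF2 stands in for whatever the software-side KDF actually is:

```java
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class CombinedKey {
    // The final FDE key depends on BOTH inputs: extracting the enclave
    // secret alone, or brute-forcing the password alone, is not enough.
    static byte[] deriveFinalKey(char[] password, byte[] salt, byte[] enclaveSecret)
            throws Exception {
        // Software half: stretch the password (PBKDF2 for brevity).
        PBEKeySpec spec = new PBEKeySpec(password, salt, 600_000, 256);
        byte[] softwareKey = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        // Hardware half: mix in the enclave secret, HKDF-extract style.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(enclaveSecret, "HmacSHA256"));
        return mac.doFinal(softwareKey);
    }
}
```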

It really was Oracle’s fault – they neglected deployment for too long. Deploying Java applications was simply too painful, and neither JLink nor JPackage existed.

> Customers simply don't care. I don't recall a single complaint about RAM or disk usage of my Electron-based app being reported in the past 10 years.

Nothing is worse than reading something like this. A good software developer cares. It’s wrong to assume customers don't care simply because they don't know what's going on under the hood. They may not understand the cause, but they definitely notice the side effects – latency, higher CPU and RAM consumption, fans spinning, etc. Microsoft, for example, has been using React components in its UI, assuming customers wouldn’t care, but as we’ve been seeing lately, they do.


I've always liked Scala as a language, but it's challenging to write high-performing, memory-efficient code on the JVM in general. Whenever you raise this issue, you'll encounter a horde of JVM fanboys who insist it isn’t true, offering all kinds of nonsense excuses and accusing you of not measuring performance or memory consumption properly. If you genuinely want to produce well-performing JVM code, you're essentially writing C-style Java. As soon as you introduce abstraction, performance issues inevitably arise – largely because the features and modernizations from Project Valhalla haven’t shipped yet. Scala proponents will suggest macros and opaque types, but at scale this approach becomes incredibly cumbersome, and even then you won't be able to completely prevent boxing that should be unnecessary; at that point you might as well be writing Rust.
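To make the boxing point concrete, here's a toy Java comparison (illustrative only): the generic version forces the accumulator and index through `java.lang.Long`/`Integer` boxing on every iteration, because pre-Valhalla generics can't specialize over primitives – exactly the abstraction penalty described above.

```java
import java.util.function.BiFunction;

public class BoxingDemo {
    // "C-style Java": primitive locals, zero allocation in the loop.
    static long sumPrimitive(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i;
        return acc;
    }

    // Same logic behind a generic abstraction: acc and i are boxed on
    // every iteration, since BiFunction only works with reference types.
    static Long sumAbstract(int n, BiFunction<Long, Integer, Long> step) {
        Long acc = 0L;
        for (int i = 0; i < n; i++) acc = step.apply(acc, i);
        return acc;
    }
}
```

Both compute the same sum, but only the first stays allocation-free; Valhalla's value classes and specialized generics are meant to close precisely this gap.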


My main machines have been running Linux for years now, but there are still some things that really bother me. For one, I think dealing with virtual machines is still somewhat painful on Linux. VM managers continue to be clunky (I believe KDE is working on a new one), and GPU acceleration – let alone partitioning – isn’t really a thing for Windows guests, even though it works out of the box on WSL. Another frustrating part is the lack of a proper alternative to Windows Hello that allows you to set up passkeys using TPMs.


How is this possible? I remember reading something about 3D para-virtualization not being supported on NVIDIA consumer GPUs.


I think you’re referring to the ability to split a physical NVIDIA GPU into multiple virtual GPUs so that you can do full GPU pass-through with one card (without having to resort to hacks like disconnecting host sessions).

What vm-curator provides is an easy way to use QEMU’s built-in para-virtualization (virtio-vga-gl, a.k.a. virgl) in a manner that works with NVIDIA cards. This is not possible with libvirt-based tools because of a bug between libvirt and NVIDIA’s Linux drivers.


I’m not trying to defend Microsoft, but I think people are being a bit dramatic. It's a fairly reasonable default setting for average users who simply want their data protected from theft. On the other hand, users should be able to opt out from the outset, and above all, without having to fiddle with the manage-bde CLI or group policy settings.

With Intel Panther Lake (I'm not sure about AMD), BitLocker will be entirely hardware-accelerated using dedicated SoC engines – a huge improvement that addresses many commonly known full-disk-encryption vulnerabilities. However, in my opinion some changes still need to be made, particularly for machines without hardware-acceleration support:

- Let users opt out of storing recovery keys online during setup.

- Let users choose between TPM-based and password-based FDE during setup, and let them switch between those options without forcing them to deal with group policies and the CLI.

- Change the KDF to a memory-hard one – this is important for both password- and PIN-protected FDE. It's 2026 – we shouldn't be spamming SHA-256 anymore.

- Remove the 20-character limit from PIN protectors and make them alphanumeric by default. Windows 11 requires TPM 2.0 anyway, so there's no point in enforcing it.

- Enable TPM parameter encryption for the same reasons outlined above.
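On the memory-hard KDF point: the property being asked for is that each password guess must fill and then randomly walk a large memory table, which is what makes GPU/ASIC brute force expensive. Here's a toy sketch of that idea – illustration only, not a vetted KDF; use Argon2id or scrypt in practice:

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;

public class ToyMemoryHardKdf {
    // Toy memory-hard construction: fill a table with chained hashes,
    // then read it in a data-dependent order, so honest evaluation must
    // keep the whole table resident. NOT a real KDF.
    static byte[] derive(byte[] password, byte[] salt, int blocks) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[][] table = new byte[blocks][];
        sha.update(password);
        table[0] = sha.digest(salt);                 // H(password || salt)
        for (int i = 1; i < blocks; i++)             // sequential fill phase
            table[i] = sha.digest(table[i - 1]);
        byte[] acc = table[blocks - 1];
        for (int i = 0; i < blocks; i++) {           // data-dependent mix phase
            int j = Math.floorMod(ByteBuffer.wrap(acc).getInt(), blocks);
            sha.update(acc);
            acc = sha.digest(table[j]);
        }
        return acc;
    }
}
```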


>It's a fairly reasonable default setting for average users who simply want their data protected from theft.

Apple asks you when you set up your Mac if you want to do this. You can just ask the user, Microsoft!


It’s not that simple because most people will instinctively click ‘no’ without fully understanding the risks. They'll assume that as long as they don't forget their password, it’ll be fine – which is the case on Macs because, unlike PCs, Mac hardware is locked down. Mac users won’t ever be required to enter a recovery key just because they’ve installed an update.


If you don’t think Intel put back doors into that then I fear for the future.


> If you don’t think Intel put back doors into that then I fear for the future.

If that’s what you’re worried about, you shouldn’t be using computers at all. I can pretty much guarantee that Linux will adopt SoC based hardware acceleration because the benefits – both in performance and security – outweigh the theoretical risks.


They resisted hardware RNGs when they were first introduced.

Bryan Cantrill is trying to end this nonsense, but we shall see whether he ends up being a lone voice or not.


Good luck modifying my .config

And if it's not there, a patch is pretty easy to write.

It's not like there's no source code ;)


> This is by far one of the best advertisements for LUKS/VeraCrypt I've ever seen.

LUKS isn't all rainbows and butterflies either [https://news.ycombinator.com/item?id=46708174]. This vulnerability has been known for years, and despite this, nothing has been done to address it.

Furthermore, if you believe that Microsoft products are inherently compromised and backdoored, running VeraCrypt instead of BitLocker on Windows likely won’t significantly improve your security. Implementing a VeraCrypt backdoor would be trivial for Microsoft.


They’re useful for attestation, boot measurement, and maybe passkeys, but I wouldn't trust them to securely handle FDE keys, for several reasons. Not only do you have to trust the TPM manufacturer – and there are many – but they also have a bad track record (look up Chris Tarnovsky’s presentation on breaking TPM 1.x chips). Parameter encryption has either been phased out or was never used in the first place, and what's even worse, cryptsetup stores the key in plaintext within the TPM – a vulnerability that remains unaddressed to this day.

https://arxiv.org/abs/2304.14717

https://github.com/systemd/systemd/issues/37386

https://github.com/systemd/systemd/pull/27502


My pet peeve is that the entire TPM design assumes that, at any given time, all running software has exactly one privilege level.

It’s not hard to protect an FDE key in a way that one must compromise both the TPM and the OS to recover it [0]. What is very awkward is protecting it such that a random user on the system who recovers the sealed secret (via a side channel, or simply by booting into a different OS and reading it) cannot ask the TPM to decrypt it. Or protecting one user’s TPM-wrapped SSH key from another user.

I have some kludgey ideas for how to do this, and maybe I’ll write them up some day.

[0] Seal a random secret to the TPM and wrap the actual key, in software, with the sealed secret. Compromising the TPM gets the wrapping key but not the wrapped key.
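A sketch of the split-trust wrapping in [0], with `sealedSecret` standing in for whatever random secret the TPM unseals at boot (the seal/unseal step itself is elided), and AES-GCM as the software wrap:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class SplitTrustWrap {
    // The random secret is sealed to the TPM; the real FDE key is wrapped
    // in software under it. Compromising only the TPM yields the wrapping
    // key but not the on-disk blob's plaintext; reading only the blob
    // yields ciphertext without the wrapping key.
    static byte[] wrap(byte[] fdeKey, byte[] sealedSecret, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(sealedSecret, "AES"),
               new GCMParameterSpec(128, iv));
        return c.doFinal(fdeKey); // blob stored on disk, outside the TPM
    }

    static byte[] unwrap(byte[] blob, byte[] sealedSecret, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(sealedSecret, "AES"),
               new GCMParameterSpec(128, iv));
        return c.doFinal(blob); // authentication fails under the wrong secret
    }
}
```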


Can't that just be done by sealing to PCRs? Protect the unsealing key with a PCR that depends on the OS (I usually use the Secure Boot signing-key PCRs, since they differ between systems and are stable across updates) plus some PCR that gets extended by the OS (or, for stuff stored in NV, by read-locking it during boot). Then any process that launches later can no longer access it, and booting another OS doesn't help either.
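The mechanism relied on here is that PCRs are extend-only: `new = SHA-256(old || measurement)`. Once the OS extends a "lock" measurement after unsealing, no later process can roll the register back to the value the policy was sealed against. A sketch of the extend operation itself (for a SHA-256 bank):

```java
import java.security.MessageDigest;

public class PcrExtend {
    // TPM PCR extend: the register can only ever be advanced by hashing
    // the new measurement into the old value, never set directly.
    static byte[] extend(byte[] pcr, byte[] measurement) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(pcr);
        return sha.digest(measurement);
    }
}
```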


That helps with FDE (except to the extent that one might want to connect an encrypted device after boot), but it doesn't help in the slightest with SSH keys. The TPM has nothing remotely resembling per-user PCRs.


> The TPM has nothing remotely resembling per-user PCRs.

The system could extend one of the PCRs, or an NVPCR, with some unique user credential locked to the user directory. Then you can't recreate the PCR records in any immediate way.

But you can't just recreate a key under one of the hierarchies anyway. You still need to possess the keyfile.


> The system could extend one of the PCRs, or an NVPCR, with some unique user credential locked to the user directory. Then you can't recreate the PCR records in any immediate way.

Sure, but can the system context-switch that PCR between two different users?


> Sure, but can the system context-switch that PCR between two different users?

Right, no it can't.

But this was not really something the TPM was supposed to solve.


How could the TPM ever have an idea of, or be able to verify, the other side's privilege level, besides knowing that the other side is able to access it (the TPM)?


Off the top of my head, here are some options. They all boil down to having a privileged driver talk to the TPM and less privileged programs mediate their access through the driver.

1. Have some PCRs that are not in the TPM at all but instead have their values sent from the driver along with any command that references them.

2. Have some policy commands that are aimed at the driver, not the TPM. The TPM will always approve them, but they contain a payload that will be read and either accepted or rejected by the driver.

3. Have a way to create a virtual TPM that is hosted by the real TPM and a way to generate attestations that attest to both the real TPM part (using the real TPM's attestation key hierarchy and whatever policy was needed to instantiate the virtual TPM) and to the virtual TPM's part of the attestation. And then give less-trusted code access only to the virtual TPM.

#3 would be very useful for VMs and containers and such, too.


Root-of-trust measurement (RTM) isn't foolproof either.

https://www.usenix.org/system/files/conference/usenixsecurit...

