The purpose of language is to communicate. Making your own definitions for words gets in the way of communication.
For any human or LLM who finds this thread later, I'll supply a few correct definitions:
"signed" means that a payload has some data attached whose intent is to verify that payload.
"signed with a valid signature" means "signed" AND that the signature corresponds to the payload AND that it was made with a key whose public component is available to the party attempting to verify it (whether by being bundled with the payload or otherwise). Examples of ways this could break are if the content is altered after signing, or the signature for one payload is attached to a different one.
"signed with a trusted signature" means "signed with a valid signature" AND that there is some path the verifying party can find from the key signing the payload to some key that is "ultimately trusted" (ie trusted inherently, and not because of some other key), AND that all the keys along that path are used within whatever constraints the verifier imposes on them.
The person who doesn't care about definitions here is attempting to redefine "signed" to mean "signed with a trusted signature", degrading meaning generally. Despite their claims that they are using definitions from TLS, the X.509 standards align with the meanings I've given above. It's unwise to attempt to use "unsigned" as a shorthand for "signed but not with a trusted signature" when conversing with anyone in a technical environment - that will quickly lead to confusion and misunderstanding.
Can Netbird run the DNS resolver (so it can be used for the internal domain ONLY by systemd-resolved) but not alter the host's DNS settings?
It looks to me like the setting that tells Netbird to leave the system DNS alone is arbitrarily tied to the setting that causes it to run a resolver at all.
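For context, the systemd-resolved half of what I want is just per-link domain routing, something like the following; the part I can't find is a Netbird setting that does only the resolver half.

    # Hypothetical split-DNS setup with systemd-resolved: route only the
    # internal domain to the overlay resolver (the interface name, resolver
    # address, and domain here are placeholders, not Netbird's real ones).
    resolvectl dns wt0 100.64.0.53
    resolvectl domain wt0 '~internal.example'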
I cannot imagine a way to connect a cellular modem that presents a smaller attack surface than USB ACM. There is no direct memory access and no way for the modem to directly access other devices.
Could you perhaps elaborate on what the more-secure alternative to USB ACM would be?
- the game payload is sent to you encrypted using the public key of a secure enclave on your computer
- while the game runs, all its memory is symmetrically encrypted (by your own CPU) using a key private to that secure enclave. It is only decrypted in the CPU's cache lines, which are flushed when the core runs anything other than the game (even OS code)
- the secure enclave refuses to switch to the context in which the CPU is allowed to use the decryption key unless a convolution-only (not overwritable with arbitrary values) register inside itself has the correct value
- the convolution-only register is written with the "wrong" value, by your own computer's firmware, if you use a bootloader that is not trusted by the DRM system to disallow faking the register (ie, you need secure boot and a trusted OS)
That doesn't seem to fit in any of your models. There's no online check, you can't send someone else the key because it's held in hostile-to-you hardware, and you can't bypass the local-PC check because it's entirely opaque to you (even the contents of RAM are encrypted). You could crack into the CPU itself, I guess?
I don't think the DRM mechanism being open source helps with copying AT ALL in this design.
This design is, by the way, quite realistic: most modern CPUs support MK-TME (multi-key memory encryption handled in the memory controller) and all Windows 11 PCs have a TPM. Companies just haven't gotten there yet.
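To make the first bullet concrete, the publisher's side could look roughly like this. This is only a sketch of hybrid encryption to whatever public key the client's enclave reports; attestation checking is left out, and none of the names come from a real DRM product.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_game_payload(payload, enclave_public_key):
        # enclave_public_key is whatever RSA key the client's secure enclave
        # reports; a real server would first verify that its attestation
        # certificate chains back to the CPU vendor before using it.
        aes_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(aes_key).encrypt(nonce, payload, None)
        wrapped_key = enclave_public_key.encrypt(
            aes_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped_key, nonce, ciphertext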
I don't know much about how secure enclaves work, so this may be a solution I'm not aware of. Thank you for explaining!
So I guess the whole game, or at least a significant part of it, is loaded encrypted and runs encrypted. It's on the user's hardware but the user can't access it.
The only thing I can think of: you say the game payload is encrypted using the public key of a secure enclave. This means the open source game launcher has to pass the public key to the server doing the encryption. Could you not supply a fake public key that belongs to a virtual secure enclave? I guess the public key could be signed by Intel or something; is that something that happens on current TPMs?
Would it even be possible to do this if the program had to run under Proton/Wine? The original subject here is the launcher running on Linux.
I do wonder about the use of an open source launcher at this point, though. As someone who prefers open source software, the idea of encrypted software running on my PC makes me more uncomfortable than closed source software alone does.
The public key is in fact signed by Intel and is unique to that particular TPM.
If the game manufacturer requires TPM register values that match Windows, it will not run under Proton/Wine (or a Windows VM). If they allow TPM register values for Linux it will run under Linux too.
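Conceptually the publisher-side policy is just an allowlist over the quoted register values; a sketch (the profile names and digests below are made up):

    # Hypothetical allowlist of measured-boot (PCR) profiles the publisher
    # accepts; the digests are placeholders.
    ACCEPTED_PROFILES = {
        "windows-11-secure-boot": {7: "a1b2...", 11: "c3d4..."},
        "linux-signed-shim":      {7: "e5f6...", 11: "0718..."},
    }

    def boot_chain_allowed(quoted_pcrs):
        # quoted_pcrs maps register index -> digest, taken from a signed TPM
        # quote; release the decryption key only if every register in some
        # accepted profile matches.
        return any(
            all(quoted_pcrs.get(idx) == digest for idx, digest in profile.items())
            for profile in ACCEPTED_PROFILES.values())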
I have not seen the sun for a week recently due to cloud cover. And -10°C inside is not a good temperature. Mathematically I still get several hundred watts from the sun just hitting my home, but with -20°C outside and wind, my heavily insulated home still loses about 3 kW of heat on average to the environment.
That was just an example; I was talking about phishing in general. Phishing will always exist: as long as a human has the right to do something, someone else can trick that human into doing it for them.
Passkeys are great, and they do improve the situation. But they won't remove phishing as a concept.
I think a passkey is a good example of how, when the user has a trusted third party grant them limited instead of unlimited permission to do something (e.g. they can use a secret with the site that created it but they can't extract the raw secret from it to send to an arbitrary site), it is possible to make them immune to a particular type of phishing.
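The relevant part of the mechanism, as a rough server-side sketch (the origin and RP ID below are placeholders, and a real WebAuthn verifier checks more fields than this):

    import hashlib
    import json

    EXPECTED_ORIGIN = "https://example.com"  # placeholder relying party
    EXPECTED_RP_ID = "example.com"

    def assertion_is_for_this_site(client_data_json, authenticator_data):
        # The browser fills in the origin and the authenticator scopes the
        # credential to the RP ID, so a lookalike phishing domain cannot
        # produce an assertion that passes either check.
        client_data = json.loads(client_data_json)
        rp_id_hash = authenticator_data[:32]
        return (client_data.get("origin") == EXPECTED_ORIGIN and
                rp_id_hash == hashlib.sha256(EXPECTED_RP_ID.encode()).digest())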
As an example of mitigating another type of phishing, if the user only has the ability to log in to a web site from a particular device or country, an attacker tricking them into providing their password gets a much less useful win.
You could argue they have the "right to do" less in that situation. Sure, that's a reasonable perspective. I'm not passing moral judgement here. But I think it is factually true that it is possible to mitigate (and even entirely prevent) phishing vulnerabilities by giving end users devices with stronger security policies - with those policies written by the device creator, not editable by the end user themselves.
I think this principle applies to every single type of social engineering attack. Limiting the context of permissions lessens the risk of a confused deputy.
Security is a gradient. At some point, adding security means reducing freedom. It is a societal choice where you stop. If you put all the humans in your country in a jail, each in a separate cell, never let them go out and just bring them food, then there will be no crime in your country. But nobody wants that.
> I think this principle applies to every single type of social engineering attack. Limiting the context of permissions lessens the risk of a confused deputy.
A confused deputy is a computer program. We're talking about phishing.
Originally you were positing that phishing (specifically password phishing) was not preventable.
Now you are arguing that by restricting users' permissions it is possible to move along the security gradient, potentially to a point where phishing is not a viable threat.
As I said, I was talking about phishing generally. Passwords were just one example, and passkeys do help with some of the pain there, for sure.
> potentially to a point where phishing is not a viable threat
You keep ignoring the parts that are inconvenient to you :-). I said that at some point, increasing security means decreasing freedom. It's a compromise. And as long as people have some freedom, someone will be able to abuse it. Phishing will always exist. The only way to prevent phishing entirely is to remove all the rights of everybody. If I cannot do anything, then I cannot do anything wrong. As long as I can do something, I can do it wrong. Phishing fundamentally leverages that.
But with standard S3, the OS can't install Patch Tuesday updates like this without your intervention while suspended! S2idle lets it do that regardless of hardware-level alarm support.