I used to run my personal website on Hugo, but after a few years I wanted to upgrade the Hugo version and was suddenly out in the cold - there was no real upgrade path. Everything only works together at one point in time, and as soon as I started upgrading, there were mismatches between the generator, the theme, and whatever widgets I used.
Moved to wordpress.com since then. No more worrying about keeping things working; I can focus on the content. Admittedly, the horrible load times of wordpress.com sites are making me look at alternatives - waiting 5 seconds for the homepage to show up is not really acceptable.
I wish someone made a hosted version of a static site generator - they'd maintain compatibility between the individual components and provide an online editor for content, but the output would just be a bunch of static files generated from it. I haven't found one so far, but if you know of one, please drop a line!
> Admittedly, the horrible load times of wordpress.com sites are causing me to look at alternatives - waiting 5 seconds for the homepage to show up is not really acceptable.
It shouldn't be that slow. Did you enable Jetpack caching and such?
WordPress sites can be lightning fast if cached well. I forget what the WordPress.com options are, but if you host on Pantheon, Wpengine or similar, it can be very very fast.
You can also self-host on Cloudways (managed WordPress containers), now owned by DigitalOcean, or use Gridpane to deploy it to any VPS with a dashboard.
> I wish someone made a hosted version of a static site generator - they maintain the compatibility between individual components, provide some online editor for content, but the output is just a bunch of static files
(Disclaimer: I work for one) Headless CMSes can often do this, but usually you have to bring your own frontend. Astro makes this setup pretty easy to maintain (it all works together and is maintained by a single vendor). Of course Next would work too but it's much more complicated. You'd still have to manage the frontend though :(
There might be a static generation plugin for WordPress too. I can't remember exactly.
What are the differences in the design principles of the AWS Rust SDK compared to AWS SDKs of other languages? In what ways is it special to work best with the Rust ecosystem?
Probably the biggest one is "batteries included but replaceable." The Rust ecosystem is still maturing, so we did a lot of work to make reasonable default choices but still allow customers to make different ones.
What does your heart tell you? Palladium[1] came and went and then suddenly most laptops and mobile devices have a built-in TPM today. No doubt history will repeat.
It's a place that applications can store such data without my knowledge or control, and I don't trust applications enough to be comfortable with them having that ability.
Don't get me wrong, it's not a major issue for me, it's just uncomfortable. It just means I prefer my machines to not have TPM hardware in them.
They can store that data, but they cannot retrieve that data. That's because the data it stores are cryptographic secrets (private keys). If they store a private key there and then delegate encryption/decryption to the TPM, you can also ask the TPM to perform said encryption/decryption using that key as the system owner.
The entire point of a TPM is ensuring that private keys intended for a specific device are never leakable off of that device.
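The "store but never retrieve" model can be illustrated with a toy in-process stand-in (entirely hypothetical - real TPM access goes through a hardware chip and a stack like tpm2-tools; this is only the shape of the interface): the caller gets an opaque handle, asks the device to perform operations with the key, and the private material itself is never returned.

```python
import hashlib
import hmac
import secrets

class ToyEnclave:
    """Hypothetical stand-in for a TPM-like device: keys go in, never out."""

    def __init__(self):
        self._keys = {}  # private material lives only inside the device

    def create_key(self) -> str:
        # The caller receives only an opaque handle, never the key bytes.
        handle = secrets.token_hex(4)
        self._keys[handle] = secrets.token_bytes(32)
        return handle

    def sign(self, handle: str, msg: bytes) -> bytes:
        # Crypto is delegated to the device, referencing the key by handle.
        return hmac.new(self._keys[handle], msg, hashlib.sha256).digest()

    def verify(self, handle: str, msg: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(handle, msg), tag)

tpm = ToyEnclave()
h = tpm.create_key()          # we hold a handle, not a key
tag = tpm.sign(h, b"payload")
assert tpm.verify(h, b"payload", tag)
```

The owner can use the key for any operation they like; what nobody can do, by construction, is export it.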
Now that being said, there is an additional function of TPMs that is more controversial, and that's how it can be used by the CPU and firmware to refuse to execute code when a chain of attestation coming from a root key stored in the TPM is not satisfied. That controversy is very valid for TPMs or other "enclave" devices which do not allow the system owner to change those root keys. And of course there is the extended ability to leverage this attestation over a network, to allow a _server_ to be able to refuse service if the attestation is not valid.
When the user can change the root attestation keys, I think local attestation is a net positive for the security of the user. When they cannot, it means that only the "blessed" builds from the hardware manufacturer can run. This second case should be made illegal in my opinion.
There's nuance here, but remote attestation is a net negative for the user. Taken to its logical conclusion, where unattested access is 100% refused without exceptions, it means the user effectively cannot run their own software on devices they own, and that is not acceptable. It also ensures that the user can only use hardware devices the service provider deems allowed, which is the more practical and likely outcome at scale.
Remote attestation is what's at issue with WEI (and indeed things like Google Play Integrity and the equivalent feature of Apple's iOS stack), not the ability to ensure that private keys cannot be leaked.
> They can store that data, but they cannot retrieve that data.
Right, which means software can engage in encryption that I can't decrypt because I can't get the keys.
You're right, RA (when the user can't change the keys) is a much more concerning thing. It can be used to prevent me from exerting full control over my own hardware.
My problem with TPM isn't really the TPM itself, it's that I have very little trust in software and so want to be able to keep a close eye on it and audit things as needed. I want to be able to do things like decrypt data streams sent over the wire, etc.
And, as I said, this is a relatively minor thing for me. Even writing as much about it as I have puts more emphasis on it than I would prefer. In practice, the majority of the software that I use doesn't even want to use the TPM, so it's all good.
I don't disagree, but how do you feel about you (the machine owner) also not having access to it?
That's my major problem with it; it locks you out of messing with your own machine data, which you can see being instantly abused by third parties to prevent modifications.
> That's my major problem with it; it locks you out of messing with your own machine data, which you can see being instantly abused by third parties to prevent modifications.
It locks everybody, including the owner, out of any data it doesn't own. That's the point. If you can pull it out, so can anybody else, and you've just made a small hard drive. Could it be used by vendors for DRM-like things? Sure. That's on the vendor, though, and not the technology itself.
TPM chips are pretty open. I had a look through the spec & API for tpm 2.0 a few years ago and there’s a lot of neat tricks you can do with them. TPM chips are an open standard with many implementations.
As far as I can tell, as a software developer you have full access to the chip. The only thing you can’t do with them (by design) is read the signing keys or generate secure boot attestations for machines which didn’t secure boot. I think you can even replace the signing keys entirely if you want to.
They aren’t a hard drive. They don’t store your data. And unfortunately I don’t think they’ll do much to prevent software bugs from causing problems. Particularly in the operating system, where software bugs can undermine the entire chain of trust model.
Don’t get me wrong; the idea of getting my computer to cryptographically prove it’s running in some locked down Xbox mode to be allowed to play Netflix or do online banking is quite the ask. The hackability of computers is one of their best features and I don’t want that genie to go back in the bottle.
But every time the conversation comes up there's so much misinformation about them. People conflate TPM chips with Intel's Management Engine (which is secret and closed source), Apple's Secure Enclave (which I think can store some data?) and other stuff that works really differently.
Doubtful. TPM chips come pre loaded with signing keys from the manufacturer. That allows 3rd parties to verify that an attestation made by your TPM is genuine. (They can do that by checking signatures all the way back to the manufacturer’s public cert).
If you replace the manufacturer’s signing keys with some keys you generated yourself, the only real effect is that your computer can no longer do remote attestations. So you can no longer convince any 3rd parties that your computer is operating in a “secure” mode.
That feels wrong in some ways, but it's also the only way you can trust used hardware, or anything which has been compromised. I do get considerable value out of stolen Apple devices having a much lower resale value, and theft is probably the bigger risk for most people.
The problem isn't the cryptography, who's using it and for what. There's nothing wrong with it if we're using it to empower and secure ourselves. There's everything wrong with it if it's some corporation using it to protect themselves from us, the owners of the machines. The former is just normal user activity. The latter means our computers are not really ours, they come pwned straight off the factory.
If you use a mobile device maybe. My desktop machine has a TPM and AFAIK I do have access to load my own keys / replace the root keys. Of course, nothing says there isn't a backdoor within the TPM, but it's not this secret locked down thing.
It's unlikely that there is a backdoor on the TPM itself. The more likely scenario is that given a TPM serial number or EKpub the vendor could furnish a seed in response to a subpoena or warrant -- however, even this is unlikely, as it would make TPM vendors huge targets for hacking. Also TPM vendors make a big deal of how they don't keep TPMs' seeds, and I tend to believe them, because again if they did keep them then they'd be huge targets.
As the system owner you can, for example:

- set passwords on the key hierarchies

- roll the seeds for the key hierarchies, thus invalidating *all* keys on the TPM
Now, Windows might stop working if you do that, and naturally, if you wanted to use a TPM for locking your filesystems then you'll need to do this _before_ you install your OS.
Also, once you change the seed for the Endorsement Key hierarchy you'll lose the ability to prove that the TPM is a legit TPM made by whatever legit TPM vendor.
So sure, this is only something you do if you know what you're doing, especially if the TPM is soldered onto the motherboard.
> One which you, as the owner, don't have the keys to.
One which nobody, not even the owner, can extract keys from. I don't understand why people don't like the fact that they can't pull keys out of the TPM. If you, the owner, can pull them so can anybody else. I know TPMs aren't invulnerable but you have to admit they significantly raise the bar of compromise.
Don’t forget that the anti-TPM stuff comes from the guy, RMS, who opposes “sudo” because it serves to let a machine owner control and audit use of super-user commands, whereas just having a root password shared by multiple users gives anyone who learns the password the freedom to do whatever they want with plausible deniability. He has a very strange and quaint way of thinking but people uncritically parrot him without appreciating what his world-view actually entails.
For those that would appreciate context: Stallman did say this but the incident that he cites as justification happened _four decades ago_ and he wrote about it in 2002 [1]:
> Sometimes a few of the users try to hold total power over all the rest. For example, in 1984, a few users at the MIT AI lab decided to seize power by changing the operator password on the Twenex system and keeping it secret from everyone else. (I was able to thwart this coup and give power back to the users by patching the kernel, but I wouldn't know how to do that in Unix.)
> However, occasionally the rulers do tell someone. Under the usual su mechanism, once someone learns the root password who sympathizes with the ordinary users, he or she can tell the rest. The "wheel group" feature would make this impossible, and thus cement the power of the rulers.
> I'm on the side of the masses, not that of the rulers. If you are used to supporting the bosses and sysadmins in whatever they do, you might find this idea strange at first.
He was talking about a time-sharing system in an academic context. We have no idea what his thoughts are now, and it's logically fallacious to discount his feelings on what multinational corporations bake into their silicon on the basis of an experience that he had back when Van Halen was still topping the charts. It isn't exactly a secret that RMS is a bit "out there" - lots of historically-significant people are. Contextualizing their work and speech in a constructive way is preferable to writing them off wholesale.
Given that RMS has a pretty good track record or predicting the kinds of abuses that we're seeing today, it seems like a good idea to at least pay attention to his ideas and not dismiss them out of hand.
Yes, and according to discussions at the time, Palladium would be always on, on all PCs, and it would banish Linux from all PCs, making Windows the sole possible OS.
> and it would banish Linux from all PCs making Windows the sole possible OS
We're getting closer to that with things like "secure" boot. Fortunately that can still be disabled - although MS required that on ARM platforms it couldn't be. The bigger Linux distros have bent over and gotten MS to sign their bootloaders, essentially putting themselves at the mercy of MS.
Back when EFI consortium wanted to make Secure Boot always on, it wasn't even clear if ARM is going to win in mobile market, let alone PC/server one.
Nowadays all non-mobile aarch64 devices I used, and even many mobile ones, let you boot your own unsigned kernel. Arm's SBBR only states that IF you implement Secure Boot and TPM support in your EFI firmware (you don't have to), it has to comply with certain rules. Nothing about preventing users from disabling it. (https://documentation-service.arm.com/static/5fb7e66fd77dd80...)
I’ve complained about this before, but I’ve been hearing “Microsoft is going to block you from installing Linux!” since like 2004, when it was a reliable way to get an easy “+5 Insightful” on Slashdot. It hasn’t happened, even on Microsoft’s own first-party computers.
At this point I think it’s firmly FUD and the people who say it’s coming any second now need to put up the evidence. Microsoft doesn’t seem to care, especially now that Windows is an afterthought to Azure, O365, etc.
If you keep track of the changes to the BIOS firmware, you can see the changes. They're minuscule, but happening. We don't have full-blown prevention of disabling secure boot yet, but it appears to me that's where this is going (disabling USB ports, having keys that prevent disabling secure boot unless you clear or change them). All it takes is some event to push these companies over the edge. Asus motherboard development relies totally on Microsoft's decisions about this.
I think the point, at least for me, is that they shouldn't be taking away any user control for consumer products. And yet that is what we have let them do. Its not going to stop.
> > I’ve complained about this before, but I’ve been hearing “Microsoft is going to block you from installing Linux!” since like 2004 [...]
> If you keep track of the changes to the BIOS firmware, you can see the changes. They're minuscule, but happening. We don't have full-blown prevention of disabling secure boot yet, but it appears to me that's where this is going.
Case in point: until recently, even with SecureBoot enabled by default, you could boot Linux distributions which have their bootloader signed by Microsoft, without going into the firmware setup screen. Nowadays, at least with some Lenovo models, you have to go to the firmware setup screen, and either enable a cryptically named option or disable SecureBoot. A quick web search gave me https://www.omglinux.com/boot-linux-modern-lenovo-thinkpads-... which has a screenshot, and which mentions that this is a new Microsoft requirement (instead of something Lenovo came up with).
Agreed, I find myself having to think orthogonally to common sense whenever I try to use one of its SDKs. Nothing works the way you expect it to, everything has 3 layers of unnecessary abstraction and needs to be approached via the back door. Many features have caveats about when it works, where it works, how much it works, during what phase of the moon it works and how long your strings can be when Jupiter is visible in the sky.
That said, if we disregard the leaky SDK APIs and half-implemented everything, it does somewhat deliver on the pluggability promise. Before OTel, you had bespoke stacks for everything. Now there is some commonality - you can plug in different logging backends to one standard SDK and expect it to more or less work. Yes, it works less well than a vertically integrated stack but this is still something. It enables competition and evolution piece by piece, without having to replace an observability stack outright (never going to be a convincing proposition).
So while the developer experience is pretty unpleasant and I am also disappointed with the actual daily usage, from an architectural perspective it opens up new opportunities that did not exist before. It is at least a partial win.
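The backend-swapping idea is similar in spirit to handlers in Python's stdlib logging - one stable API in the instrumented code, interchangeable destinations behind it. A rough stdlib-only analogy (not the actual OTel SDK, just the architectural shape it standardizes):

```python
import io
import logging

# One standard API (the logger) that instrumented code talks to;
# interchangeable backends (handlers) decide where telemetry goes.
log = logging.getLogger("app")
log.setLevel(logging.INFO)

buf = io.StringIO()
backend = logging.StreamHandler(buf)  # could be a file, syslog, HTTP...
log.addHandler(backend)

log.info("request handled")
backend.flush()
assert "request handled" in buf.getvalue()
```

Swapping `backend` for a different handler changes the destination without touching any of the code that emits the events - the same decoupling OTel aims for across vendors, at the cost of the lowest-common-denominator APIs complained about above.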
While some of the SO practices can feel dumb, I wonder what is the tradeoff we are making here? Could it be that for whatever reason (e.g. personality) we might benefit from accepting these "dumb" choices because it also brings with it unrelated benefits? For example, might it be that if they were forced to accept "frivolous" statements such as "thank you" (from the example you gave), might it cause moderators to not moderate, and thereby allow in spam & other nasty bits?
It is worth bearing in mind that if we, "good people", complain about moderation, we only see the parts of it that touch our "good posting". There might be plenty of good that moderators are also doing, which only the bad guys see.
What is the state of the art for doing encryption with ECC? The author just says "use NaCl" here but what should I do if I am not in a position to do that but can still use ECC?
My understanding of ECC is that it is not really suitable for encryption as-is, as RSA was, rather it is used for key agreement (somehow through a multi-step process that I do not understand). But it is unclear how much of this is just rumor and implementation limitations.
If you can't use NaCl directly you may still be able to use the underlying "25519" Edwards curve. The point is that it was designed in such a way to make implementation bugs ("bad" points, separate addition/doubling formulas, and other edge cases) either non-existent or at least easy to deal with.
In contrast, ECDSA seems like it was almost designed by the NSA to make it as easy as possible to accidentally introduce an exploitable implementation bug.
You are right that ECC is mainly a key agreement/transport and signing tool, not to be used directly for encryption except in very special cases (e.g. modified ElGamal for verifiable voting schemes).
> The author just says "use NaCl" here but what should I do if I am not in a position to do that but can still use ECC?
Not being in a position to use even a single-file C library like Monocypher (well, 2 compilation units if you want the optional parts), is… well, unusual.
> My understanding of ECC is that it is not really suitable for encryption as-is, as RSA was, rather it is used for key agreement (somehow through a multi-step process that I do not understand)
The steps are: once you've done key agreement, you have a shared key. You can then use authenticated encryption with that key. One caveat though is that key agreement often doesn't give you an actual key, but a statistically biased shared secret. So the actual steps are:
1. Do key agreement. You now have a shared secret.
2. Hash your shared secret. You now have a key.
3. Encrypt your messages with your key. Use AEAD for this.
Caveat: I omitted a number of important details, most notably forward secrecy.
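The three steps above can be sketched in a few lines. A toy finite-field Diffie-Hellman stands in for the elliptic-curve scalar multiplication (the structure is identical; only the group operation differs), and the parameters are purely illustrative:

```python
import hashlib
import secrets

# Toy Diffie-Hellman over a prime field standing in for the curve:
# "multiply the base point by your secret scalar" corresponds to
# pow(g, secret, p) here. Illustrative parameters - NOT a secure group.
p = 2**127 - 1  # a Mersenne prime, far too small for real use
g = 3

# 1. Key agreement: each side keeps a secret and publishes g^secret.
a = secrets.randbelow(p - 3) + 2
b = secrets.randbelow(p - 3) + 2
pub_a, pub_b = pow(g, a, p), pow(g, b, p)
shared_a = pow(pub_b, a, p)  # Alice combines Bob's public value
shared_b = pow(pub_a, b, p)  # Bob combines Alice's public value
assert shared_a == shared_b  # both sides derive the same shared secret

# 2. Hash the (statistically biased) shared secret into a uniform key.
key = hashlib.sha256(shared_a.to_bytes(16, "big")).digest()

# 3. Feed `key` to an AEAD such as ChaCha20-Poly1305 or AES-GCM
#    (e.g. via a library like NaCl or Monocypher); the stdlib has none.
```

In practice step 2 would be a proper KDF (e.g. HKDF) rather than a bare hash, and the agreement itself would be X25519 rather than this toy group.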
The author definitely should have clarified this. The standard is to use ECC for key exchange only. This can be done entirely offline - each party chooses a random secret scalar, and multiplies the base point of the curve by that scalar to produce a public point. You publish your public point in advance of communication. When you want to send a message, you multiply the other party’s public point by your secret scalar to obtain a shared key. Then, just use a well-studied symmetric AEAD construction to encrypt messages.
Of course, this doesn’t incorporate any forward secrecy, which is a key benefit of using something like TLS or Noise rather than rolling your own custom protocol.