Legally, this isn't sufficient. Your subscription contract is independent of your payment method. If you don't pay, that doesn't necessarily mean that your subscription is cancelled, and you could end up in court and lose.
What is necessary is regulatory (or statutory) enforcement of easy, online notice of cancellation, without the company being able to frustrate you in giving them that notice (or in recording and acknowledging it).
> And if a quorum of mining pools gets together, they can fork the blockchain or do all sorts of other shit.
This is a common misunderstanding. Miners do not have power in the system. Anyone can fork Bitcoin, but when people want payment in Bitcoin, they won't accept ForkedBitcoin regardless of the number of miners shilling it. Meanwhile, mining rewards on the original Bitcoin are left on the table for anyone who wants to enter, at greater mining profit.
Semantics make hard assertions about "containers" worthless. It depends on what one means by a container exactly, since Linux has no such concept and our ecosystem doesn't have a strict definition.
By default no copying will be done. While `notes.rewrite.amend` and `notes.rewrite.rebase` default to true, you also need `notes.rewriteRef`, which tells git which notes refs should be rewritten, and it has no value by default. (You can set it to a glob to copy over all notes refs.)
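For example, setting it to a glob that matches every notes ref should be enough (a minimal sketch; narrow the glob if you only care about specific refs):

    git config notes.rewriteRef "refs/notes/*"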
I'm not sure what this "breaks". Unless a site requires attestation and validates that attestation, a bad software FIDO2 implementation will leave users vulnerable should they choose to use one.
If anything I'm worried that corporate security people will hear of "attacks" like this and blindly add "must use attestation with passkeys" to their checklists, and desktop computing will end up in the same state as mobile, where you have to have an unmodified OS install from one of a handful of authorized fiefdoms to log into your bank. It's a long way off, due to the number of old laptops without TPMs still around, but it's a plausible future.
Edit: I may be misunderstanding the scope of attestation in a FIDO/Webauthn context. Is it a legitimate concern that it would lock out other OSes, or would you simply need a hardware key (or perhaps a TPM), but could run what you want above it?
> add "must use attestation with passkeys" to their checklists
We already do. Mostly from the compliance side: I can't call passkeys "phishing-resistant" unless I can lock them down into unexportable passkey providers only. Some more details from a previous comment of mine:
From a corporate compliance perspective, I need to ensure that employee keys are stored in a FIPS-compliant TPM and unexportable. Key loss is not an issue because we have, ya know, an IT department. The only way I can ensure this happens is by whitelisting AAGUIDs and enforcing attestation.
With these factors I can also get out of the MFA hellhole (because I can prove that whitelisted vendor X already performs MFA on-device without me having to manage it: for example, WHFB requires something you have (keys in your TPM) and either something you are (face scan / fingerprint) or something you know (PIN), without me having to store/verify any of those factors or otherwise manage them). Same goes for passkeys stored in MS Authenticator on iOS/Android.
>I can't call passkeys "phishing-resistant" unless I can lock them down into unexportable passkey providers only
I don't think this is accurate. As far as I know, no credential managers (except for maybe KeePassX) allow export of passkeys, and will instead only allow for secure transfer via the new Credential Exchange Protocol.
> secure transfer via the new Credential Exchange Protocol
If it's "transferable", it's not phishing-resistant (ie it's possible for a user to get bamboozled into transferring their keys to a bad actor), right? Regardless of mechanism.
You might've missed the "FIPS" part as well. This requirement effectively means the keys (or the keys to decrypt the keys) must be stored in a tamper-resistant hardware crypto device (read: your TPM) and basically no credential managers (apart from the first-party ones we have whitelisted) use the TPM for storing your keys.
> It's a long way off, due to the amount of old laptops with no TPM about, but a plausible future
TPMs can't create hardware-attested passkeys, at least they couldn't do that with the TPM 2.0 spec.
And you can just use a USB hardware token to get attested keys. Or you can use WebAuthn over Bluetooth to your phone, essentially using your phone's secure enclave (or its equivalent) as the key source.
Being able to require attested passkeys is a _good_ thing.
Except it means backing up or moving your credentials is somewhere between a pain and infeasible, and you're requiring people to go buy another device for little to no real security benefit. Every browser already generates strong random passwords that are tied to specific sites. They've done so for many years. Passkey attestation in a non-managed-org context is trying to solve a problem that's way past the point of diminishing returns while making things more fragile for users. You also can't really do attestation without having a blessed set of vendors (otherwise a software implementation could do it), so lockin is required.
> Except it means backing up or moving your credentials is somewhere between a pain and infeasible
That's the point.
> and you're requiring people to go buy another device for little to no real security benefit.
No. The benefit is clearly there: hardware-originated keys can not be stolen under any normal circumstances. Meanwhile, synced passkeys are just fancy login/password pairs, so they can be exfiltrated by an attacker. E.g. by scanning the RAM of the passkey manager.
Of course, the operating system can try to add additional barriers, but the underlying keys must at some point be in clear text form.
Right, that makes such a system unusable for normal people, so it is not a good thing to force it upon them. The benefit is not clearly there, because anything that can manipulate local memory can also just use the key directly; or, if some kind of physical button press is required, it can wait for the user to log in and then do whatever it wants with the session cookie, alter page contents, or anything else. If the token doesn't display what it's authorizing (e.g. a YubiKey), malware could also watch for any usage, block that request to the device, and instead auth against the user's bank. If multiple button presses are needed (e.g. the user has to press again to confirm a transfer), it can claim there was an error and ask them to try again.
Normal people are however not concerned with these Mission Impossible scenarios, and random passwords are good enough while being easy to use without an IT department to fix when it goes wrong. A password manager (which every browser has built in) already associates passwords to domains for phishing resistance. Users already should never need to enter a password manually unless the site did something stupid to try to block the password manager from working.
> Right, that makes such a system unusable for normal people, so it is not a good thing to force it upon them.
Whut? Passkeys work perfectly fine for "normal people".
> The benefit is not clearly there because anything that can manipulate local memory can also just use the key directly
Correct. But it does require a fairly high level of system access. Hardware-bound keys also allow full hardware-attested authentication.
> Normal people are however not concerned with these Mission Impossible scenarios, and random passwords are good enough while being easy to use without an IT department to fix when it goes wrong.
If you're using truly random passwords, then you're using a password manager. And if you're using a password manager, then why not just use passkeys?
All the popular password managers support them: BitWarden, 1Pass, iCloud Keychain, even LastPass.
Passkeys don't offer anything above random passwords, and hardware attested passkeys obviously cannot work with a software password manager, which is the point.
Also like I keep saying, every browser already has a password manager. You don't need an external one. Notably though, Firefox's password manager doesn't support software passkeys, so they are completely unusable for me, for example. I'm certainly not going to sign up for some SaaS so I can use a worse version of passwords.
> synced passkeys are just fancy login/password pairs, so they can be exfiltrated by an attacker. E.g. by scanning the RAM of the passkey manager.
That’s an overly reductionist view.
Lots of password compromises still happen due to credential reuse across services, server-side compromises paired with brute-force-able passwords, and phishing/MITM attacks, and software-based WebAuthN credentials prevent all of these.
The spec tells websites to please not do validation on specific hardware. You can do a light form of remote attestation, but you'll have to convince the user to use passkeys only and not some kind of username+password backup, which is still a tough sell.
If you want remote attestation, Safari already has it, but I'm not sure if their attempt at standardising is going anywhere. It's been a while since the draft got updated or mentioned anywhere.
> If you want remote attestation, Safari already has it
No, Safari/Apple gave up on remote attestation when they introduced passkeys, except for MDM devices where the MDM profile can allow attestation for RP domains on an opt-in basis.
>except for MDM devices where the MDM profile can allow attestation for RP domains on an opt-in basis.
And even then, the attestation you get in that scenario is just an attestation that the passkey was created on a managed device. It is not a hardware/device attestation.
If this "attack" convinces security people of anything that just thinking through the FIDO/WebAuthN specs and threat model didn't already, I'd be concerned about the general quality of their work.
It is still almost always unwanted garbage though, which is why the specification says please don't.
We know from the Web PKI how this goes. People who have no idea what they're doing copy-paste a "good" list in 2025, but in 2035 the fact that a third of the vendors on that "good" list are now known villains and another third went bankrupt doesn't change the problem that stuff on the list works and everything else doesn't, because the list was mindlessly copy-pasted.
Narrowly, the vendor attestation could make sense: if you're BigBank and you bought every employee (or maybe every customer, I wish) a FooCo security key, you can require FooCo security keys. If you're big enough you could have FooCo make you a custom model (maybe with your company logo) and attest that every key is the custom model. I expect Yubico would sell you that product if you were willing to commit to the volume. You get assurance of the exact hardware properties, and you retain the option to shop around by updating your software to trust different attestation. IMO not worth standardizing, but people really wanted this, and better that it exists but isn't used than that it doesn't exist and they walk away.
The commenters below are confusing two things - Rust binaries can be dynamically linked, but because Rust doesn’t have a stable ABI you can’t do this across compiler versions the way you would with C. So in practice, everything is statically linked.
Rust's stable ABI is the C ABI. So you absolutely can dynamically link a Rust-written binary and/or a Rust-written shared library, but the interface has to be pure C. (This also gives you free FFI to most other programming languages.) You can use lightweight statically-linked wrappers to convert between Rust and C interfaces on either side and preserve some practical safety.
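A minimal sketch of what that can look like (the crate and function names are made up): the library exposes an unmangled C-ABI symbol, and a thin, statically linked wrapper on the consuming side hides the `unsafe` call.

    // Library crate, built as a `cdylib` so it becomes a C-callable .so/.dylib/.dll:
    #[no_mangle]
    pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
        a + b
    }

    // Consuming crate (any language that speaks the C ABI links the same way):
    #[link(name = "mylib")]
    extern "C" {
        fn mylib_add(a: i32, b: i32) -> i32;
    }

    /// Safe wrapper around the foreign symbol.
    pub fn add(a: i32, b: i32) -> i32 {
        unsafe { mylib_add(a, b) }
    }

The wrapper is where you re-establish the Rust-level guarantees (ownership, lifetimes, error types) that the C interface can't express.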
> but the interface has to be pure C. (This also gives you free FFI to most other programming languages.)
Easy, not free. In many languages, extra work is needed to provide a C interface: strings may have to be converted to zero-terminated byte arrays, memory that can be garbage collected may have to be locked, structs may have to be converted to C struct layout, etc.
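Even in Rust, a string or a plain struct needs explicit conversion before crossing the boundary. A small sketch (the C functions below are hypothetical):

    use std::ffi::CString;
    use std::os::raw::c_char;

    #[repr(C)] // force C-compatible field layout
    pub struct Point {
        pub x: f64,
        pub y: f64,
    }

    extern "C" {
        fn log_message(msg: *const c_char);          // hypothetical C API
        fn translate(p: *mut Point, dx: f64, dy: f64);
    }

    fn example() {
        // Rust strings aren't NUL-terminated, so build a C string first.
        let msg = CString::new("hello").expect("no interior NUL bytes");
        let mut p = Point { x: 1.0, y: 2.0 };
        unsafe {
            log_message(msg.as_ptr());
            translate(&mut p, 3.0, 4.0);
        }
    }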
A culture issue: in the C++ world of the Apple and Microsoft ecosystems, shipping binary C++ libraries is a common business, even if it is compiler-version dependent.
This is why Apple made such a big point of having a better ABI approach on Swift, after their experience with C++ and Objective-C.
Meanwhile on the Microsoft side, you will notice that all of Victor Ciura's talks at Rust conferences list dealing with the ABI as one of the key points Microsoft is working through in the context of Rust adoption.
COM is actually good though. Or if you want another object system, you can go with GObject, which works fine with Rust, C++, Python, JavaScript, and tons of other things.
Plenty do not, especially on Apple and Microsoft platforms, because they have always favoured other approaches over bare-bones UNIX support in their dynamic linkers and C++ compilers.
Static linking doesn't produce smaller binaries. You are literally adding the symbols from a library into your executable rather than simply mentioning them and letting the dynamic linker figure out how to map those symbols at runtime.
The sum size of a dynamically linked binary plus its dynamic libraries may be larger than one statically linked binary, but whether that holds for more static binaries (2, 3, or hundreds) depends on how much of each library's surface area your applications use. It's relatively common to see certain large libraries only dynamically linked, with the build going to great lengths to build them as shared objects and have the executables link them via a location-relative RPATH (using the $ORIGIN feature, as sketched below) to avoid the extra binary-size bloat across large sets of binaries.
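A sketch of that RPATH trick (paths and library name made up): each executable records a search path relative to wherever it ends up installed, so the bundled shared object is found without touching the system library path.

    cc -o bin/tool tool.o -Llib -lbigdep '-Wl,-rpath,$ORIGIN/../lib'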
Static linking does produce smaller binaries when you bundle dependencies. You're conflating two things - static vs dynamic linking, and bundled vs shared dependencies.
They are often conflated because you can't have shared dependencies with static linking, and bundling dynamically linked libraries is uncommon in FOSS Linux software. It's very common on Windows or with commercial software on Linux though.
You know how the page cache works? Static linking makes it not work. So 3000 processes won't share the same pages for the libc but will have to load it 3000 times.
I wonder what happens in the minds of people who just flatly contradict reality. Are they expecting others to go "OK, I guess you must be correct and the universe is wrong"? Are they just trying to devalue the entire concept of truth?
[In case anybody is confused by your utterance, yes of course this works in Rust]
That would have been a good post if you'd stopped at the first paragraph.
Your second paragraph is either a meaningless observation on the difference between static and dynamic linking or also incorrect. Not sure what your intent was.
Go may or may not do that on Linux, depending on what you import. If you call things from `os/user`, for example, you'll get a dynamically linked binary unless you build with `-tags osusergo`. A similar case exists for `net`.
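For example, to force the pure-Go implementations of both (I believe the `netgo` tag is the one that covers `net`):

    go build -tags osusergo,netgo ./...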
Kind of off-topic. But yeah it's a good idea for operating systems to guarantee the provision of very commonly used libraries (libc for example) so that they can be shared.
Mac does this, and Windows pretty much does it too. There was an attempt to do this on Linux with the Linux Standard Base, but it never really worked and they gave up years ago. So on Linux if you want a truly portable application you can pretty much only rely on the system providing very old versions of glibc.
It's hardly a fair comparison with old Linux distros when OS X certainly will not run anything old… remember they dropped Rosetta, Rosetta 2, 32-bit support, OpenGL… (the list continues).
And I don't think you can expect windows xp to run binaries for windows 11 either.
So I don't understand why you think this is perfectly reasonable to expect on linux, when no other OS has ever supported it.
Static linking produces huge binaries. It lets you do LTO, but the amount of optimisation you can actually do is limited by your RAM. Static linking also causes the entire archive to need constant rebuilds.
You don't need LTO to trim static binaries (though LTO will do it): `-ffunction-sections -fdata-sections` in the compiler flags combined with `--gc-sections` (or equivalent) in the linker flags will do it.
This way you can get small binaries with readable assembly.
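A minimal sketch with a C toolchain (file name made up): the section flags go on the compile step, the garbage collection on the link step.

    gcc -Os -ffunction-sections -fdata-sections -c foo.c
    gcc -Wl,--gc-sections -o foo foo.o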
Rust cannot dynamically link to Rust. It can dynamically link to C and be dynamically linked by C; if you combine the two you can cheat, but it is still C that you are dealing with, not Rust, even if Rust is on both sides.
Rust can absolutely link to Rust libraries dynamically. There is no stable ABI, so it has to be the same compiler version, but it will still be dynamically linked.
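A minimal sketch of that setup (crate layout assumed): declare the library crate as a `dylib` and build everything with the same toolchain; `-C prefer-dynamic` additionally links the standard library as a shared object.

    # Cargo.toml of the library crate
    [lib]
    crate-type = ["dylib"]

    # then, for the consumer:
    # RUSTFLAGS="-C prefer-dynamic" cargo build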
You can use dynamic linking in Rust with the C ABI, which means going through the `unsafe` keyword, also known as "trust me bro". Statically linking directly against Rust source means it is checked by the compiler, so there is no need for `unsafe`.
Here in the UK, there is no "legal name" AIUI: just whatever I am known as, which then ends up in my official government documents such as my passport and my driving licence. If I make up a symbol to represent myself, then where are we with GDPR compliance if I demand that symbol be on my official documents?
It seems reasonable to limit compliance to some specific character set. Which character set should it be? Just one that can accurately encode all "official" languages in the region?
There's a reason the court bothered in this case. The bank (ING if I recall) in question has been promising to fix these issues for years because someone decided they could "just" migrate to a new system and all the legacy problems would be a thing of the past. Deadlines were missed, repeatedly, and the whole process has been a disaster outside of the name thing.
Furthermore, this happened in Belgium, a country with at least three official languages and with enough friction between different groups that there is a legal requirement for law enforcement to talk to you in your own language (i.e. Dutch in the Francophone area and French in the Dutch-speaking area).
Also, I think GDPRhub has the most apt take of the whole situation:
> A correctly functioning banking institution may be expected to have computing systems that meet current standards, including the right to correct spelling of people's names.
Honestly, it's ridiculous that a bank can even operate in a country without being able to store common names. The banking system isn't from the 70s either; it was deployed in the mid-nineties, two years after UTF-8 came out and six years after UCS-2 came out.
If I start a bank in the UK and my system can't render the letter "f", I expect someone to speak up and declare how ridiculous that is. This is no different.
What the 'reasonable' character set for names should be is more a philosophical question than a technical one. I think the unicode set of characters fully contains it. But would someone declaring their name is a unicode emoji be 'reasonable'? I think it would not be. Perhaps only certain script ranges within unicode?
> normative subset of Unicode Latin characters, sequences of base characters and diacritic signs, and special characters for use in names of persons, legal entities, products, addresses etc
Once you start exhaustively naming "these are the Only Blessed Ranges," you'll be bitten by the usual brouhaha of "an email address ends with .[a-z]{2,3}". We all know how that went, and ".[a-z]{2,4}" didn't cut it either, not even in 2000.
To add to the complexity, not all Chinese characters in use for names are representable in Unicode. Perhaps at some point legal institutions must just define the list of characters that people can have as part of their name as listed on documentation. This reminds me of that 'what programmers believe about names' article from a while back.
Han unification[1] prevents the representation of all Chinese characters. There are multiple languages that use Chinese characters, but they don't all use the same characters. Unicode decided to only use Han Chinese characters, so names using other sorts of Chinese characters can't be written with Unicode. The Han "equivalent" characters can be used, but that looks weird.
Think of it as though Unicode decided that the letter "m" wasn't needed to write English text, since you can just write "rn" and it'll be close enough. Someone named "James" might want to have their name spelled correctly instead of "Jarnes", but that wouldn't be possible. Han unification did essentially this.
I feel it's unlikely that this is the explanation for what GGP had in mind. I postulate that name characters usually have no variants, and thus do not undergo unification, or, where there are variants, they are already encoded as Z-variants, so the contention is also moot.
Even then there are plenty of legitimate scripts that are useless to an individual in practice. How am I going to read a name out in a waiting room if I cannot read the script? For some scripts I probably couldn't even perform a string match; certainly not against handwriting.
I think "reasonable" must be limited to the subset that the population of that country can reasonably be expected to be literate in; otherwise transliteration is necessary.
The Artist Formerly Known As Prince has entered the chat.
(the name was a unique symbol)
Yes, all abstractions leak, there will always be edge cases. Doesn't mean "JUST USE ASCII DUH" (the lowercase extension is for the wimps); a whole spectrum exists between these extremes.
Right. Unicode is the current best effort good faith way to include everyone.
It isn't perfect, and there's always a way to subvert good faith if you want to make a point or just be an asshole. The Unicode Consortium is working on the first, and the second can be handled by the majestic indifference of bureaucracy.