I'm not sure how what you describe is different from what a tiling WM offers. Except for the "closing main window" thing, but can't you just undo the close from the history menu?
Imagine an IDE that allows you to see multiple files in an X by Y grid, then replace "files" with web pages. Is expecting that feature in a browser that far-fetched? Or are we resigned to multiple windows + a tiling WM as the one true solution for this use case?
You're not describing how a tiling WM fails to satisfy your needs. My guess is you're unhappy with toolbars being repeated in each tile, is that it? If so, I believe Suckless's stripped-down browser is the closest you'll get, together with a tiling WM.
If code editors are any indication, yes, some power users do like a tiling window manager bundled into the app. So you may be right, but I think it's out of scope for Firefox, which targets the masses, and it's definitely off-topic here.
> You're not describing how a tiling WM fails to satisfy your needs.
Yes, I don't want repeated toolbars because they're a waste of space when duplicated in each window/pane. Having to switch browsers isn't the point; the discussion is about what Firefox can do to improve its users' lives (even if not everyone expects/needs split panes in their browser, some do).

I don't want to manage 4 windows tiled like 4 panes; I want to manage a single window with 4 panes. What happens if I close one of the 4? Now I have a gap that a related or unrelated window will fill, or it becomes wasted space. This doesn't happen with split panes, because the other panes expand to take the space.

I logically group my work into separate windows, so in a split-pane setup, once I'm done, I close a single window instead of closing the separate "pane-like" windows one by one.
I'm very curious as well, because my very limited understanding tells me the answer is nothing. The relay hides your identity. Your phone checks the attestations, so it won't send your data to servers that aren't running the published software, which ensures the encryption keys are ephemeral. Once your session is done, the keys are deleted.
Law enforcement would need to seize the right server among millions while it's processing your request and perform an attack on it to get the keys before they're gone.
My next question is what happens if/when the attestation keys are stolen.
The goal is money and control. They enact a partial "solution" so that later they can say they need more access to private data because of the loopholes. Meanwhile the surveillance-and-compliance industry grows, providing more paper-pushing jobs to the establishment. Power-tripping politicians also gain the ability to spy on opponents and basically anyone they please.
The same diminishing-returns argument, that quality of life doesn't improve much when going from 130k€ to 1M€, can be applied to capital controls. Is this 10k€ limit really what's needed to save the welfare state? Were the previous controls not enough?
Or is it that the welfare state is collapsing on its own and grasping at straws?
> Or is it that the welfare state is collapsing on its own and grasping at straws?
I think you just reached the conclusion? These things never worked (look at history), but they're usually implemented by the incompetent (which is what got us here in the first place).
>> No, because PBKDFs are not a good mechanism for creating encryption keys
> I'm curious about what you mean by this. Isn't it in part what PBKDFs are designed for?
Password-based key derivation functions start with the assumption that some entropy is provided by the user. Which means that the entropy is typically of awful quality. A PBKDF does the best it can with that low entropy, which is to make it into a time- and maybe space-expensive brute-forcing problem. But a PBKDF is starting with one hand tied behind its back if the user-supplied entropy is "password" or "hunter2." If we aren't burdened by that assumption, then we can generate high-quality entropy -- like 128 or 256 bits of CSRNG-generated noise -- and merely associate it with the user, rather than basing it on the user's human-scale memory.
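To make the contrast concrete, here's a quick Python sketch (illustrative only; the iteration count is just a plausible modern choice):

    import hashlib
    import secrets

    # Low-entropy start: the key is only as strong as the password,
    # so the PBKDF's whole job is to make brute-forcing expensive.
    salt = secrets.token_bytes(16)
    weak_key = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

    # High-entropy start: 256 bits straight from the OS CSPRNG.
    # Nothing to brute-force; this key just has to be stored and
    # associated with the user rather than remembered by them.
    strong_key = secrets.token_bytes(32)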
PBKDFs also generally assume that users are transmitting their plaintext passphrases to the server, e.g., when you HTTP POST your credentials to server.com. Of course, browsers and apps use transport security so that MITMs can't grab the passphrase over the wire, but the server actually does receive the phrase "hunter2" at some point if that's your passphrase. So again, it's a rotten assumption -- basically the foundation of most password-database compromises on the internet -- and PBKDF does the best it can.
If you remove that assumption and design a true asymmetric-encryption-based authentication system, then you don't need the obfuscation rounds of a PBKDF because the asymmetric-encryption algorithm is already resistant to brute-forcing. The script kiddie who steals /etc/passwd from a server would effectively obtain a list of public keys rather than salted hashes, and if they can generate private keys from public keys, then they are already very wealthy because they broke TLS and most Bitcoin wallets.
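A toy version of that challenge-response idea in Python, using the `cryptography` package (this is not the real WebAuthn protocol, which also adds origin binding, counters, etc.; it's just the core asymmetric trick):

    import secrets
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the client keeps the private key; the server stores only
    # the public key (the "list of public keys" a thief would get).
    client_key = Ed25519PrivateKey.generate()
    server_side_pubkey = client_key.public_key()

    # Login: the server sends a fresh random challenge, the client signs it.
    challenge = secrets.token_bytes(32)
    signature = client_key.sign(challenge)

    # The server verifies; no password or private key ever crosses the wire.
    try:
        server_side_pubkey.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")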
Think of passkeys as a very user-friendly client-side certificate infrastructure. You wouldn't let your sysadmin base your enterprise website's TLS certificates on a private key derived from their dog's birthday. You wouldn't let users do that for their certs, either.
sowbug has a more detailed answer, but the TL;DR is that PBKDFs were considered OK a long time ago, before the security implications were really understood. Essentially, they're low entropy in practice (e.g. a person _could_ make a 10+ word password, but they're not going to for a password they have to enter frequently).
You're much better off using the password to protect a truly random key, though that of course immediately raises the question of how you store the true key :D Modern systems use some kind of HSM to protect the true keys. If the HSM is deeply integrated, like the SEP (or possibly the SE, I can never recall which is which) on Apple hardware, it can simply never expose the actual keys, only handles, and encrypt and decrypt data directly before it's seen by the AP.
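Roughly, the software-only version of that pattern looks like this (a sketch; a real HSM/SEP would do the wrapping internally and never release the data key at all):

    import hashlib
    import secrets
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The "true key" is fully random: the full 256 bits of entropy.
    data_key = secrets.token_bytes(32)

    # The password only derives a wrapping key, so the PBKDF's cost
    # still matters, but only for unwrapping, not for the data key itself.
    salt = secrets.token_bytes(16)
    wrapping_key = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

    # Store salt + nonce + wrapped; an HSM does this step internally
    # and hands back a handle instead of ever exposing data_key.
    nonce = secrets.token_bytes(12)
    wrapped = AESGCM(wrapping_key).encrypt(nonce, data_key, None)

    # On login, re-derive the wrapping key and unwrap.
    kek = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
    assert AESGCM(kek).decrypt(nonce, wrapped, None) == data_key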
Websites already have a hard time getting users to sign up, so requiring them to enroll backup authenticators (which they won't have) is not going to work. Printing or writing down backup codes is even worse from a UX point of view.
IIRC the spec has a flag hinting that the passkey is backed up (in iCloud or your Google account), so the relying party (website) knows whether it needs to mandate backups, but that means the secret doesn't stay on your device and goes to the mothership. Given that, I don't see why the spec wouldn't standardize the transfer of secrets from one company to another.
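If I remember the WebAuthn layout right, those are the BE (backup eligible) and BS (backup state) bits in the authenticator-data flags byte; a hypothetical parser:

    # Hypothetical helper: authenticator data starts with a 32-byte
    # rpIdHash, then one flags byte, then a 4-byte signature counter.
    def backup_flags(auth_data: bytes) -> tuple[bool, bool]:
        flags = auth_data[32]
        backup_eligible = bool(flags & 0x08)      # BE, bit 3
        currently_backed_up = bool(flags & 0x10)  # BS, bit 4
        return backup_eligible, currently_backed_up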
Chrome allows you to export saved passwords as CSV (chrome://settings/passwords), so I'd say passkeys are a regression in this regard. You won't be able to switch to another browser easily; you'll have to go to each website and change/add authentication methods, as far as I know.