No, we would use something similar to S-Expressions [1]. Parsing and generation would be at most a few hundred lines of code in almost any language, easily testable, and relatively extensible.
With the top level encoding solved, we could then go back to arguing about all the specific lower level encodings such as compressed vs uncompressed curve points, etc.
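For a sense of scale, here's a minimal sketch of an S-expression reader/writer in Python (whitespace-separated atoms only, no error recovery; a real implementation would add the canonical binary form, length-prefixed byte strings, and so on):

    # Minimal S-expression reader/writer. Atoms stay strings; lists nest.
    # Unbalanced input raises ValueError or IndexError -- fine for a sketch.
    def parse_sexpr(text):
        tokens = text.replace("(", " ( ").replace(")", " ) ").split()
        pos = 0

        def read():
            nonlocal pos
            token = tokens[pos]
            pos += 1
            if token == "(":
                items = []
                while tokens[pos] != ")":
                    items.append(read())
                pos += 1  # consume the closing ")"
                return items
            if token == ")":
                raise ValueError("unexpected )")
            return token

        expr = read()
        if pos != len(tokens):
            raise ValueError("trailing data after expression")
        return expr

    def emit_sexpr(expr):
        if isinstance(expr, list):
            return "(" + " ".join(emit_sexpr(e) for e in expr) + ")"
        return expr

    # Round trip on a hypothetical public-key structure:
    src = "(public-key (ecdsa (curve p256) (point abcd1234)))"
    assert emit_sexpr(parse_sexpr(src)) == src

That's the entire top-level grammar; everything else is arguing about what goes in the atoms.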
These files are actually cursed and I want all drives that contain their data destroyed with acid. But I have a slight feeling other voting software isn't really any better, even though in theory it should be relatively simple software in the grand scheme of things.
It's been a few years since I've slung code with it, but I'm pretty sure IAR had their own compiler (along with its own special occasional bugs). Of the IDEs I've used, it wasn't that bad. But Qt Creator was better. Bringing together IAR's tech and reach with Qt's expertise does make a lot of sense.
It’s useful for someone to be wrong on the Internet.
I’ve learned a lot from watching constructive disagreements between other people. Regardless of whether they’re “right” or not, healthy disagreements sharpen our perspectives.
Starts reading: "fantastic, this is what we've been needing! But... where is code signing?"
> One problem that WAICT doesn’t solve is that of provenance: where did the code the user is running come from, precisely?
> ...
> The folks at the Freedom of Press Foundation (FPF) have built a solution to this, called WEBCAT. ... Users with the WEBCAT plugin can...
A plugin. Sigh.
Fancy, deep transparency logs that track every asset bundle deployed are good. I like logging - this is very cool. But this is not the first thing we need.
The first thing we need is the ability to host a public signing key somewhere browsers can fetch it, so they can automatically verify the signature on the root hash served up in that integrity manifest. Then point a tiny, boring transparency log at _that_. That's the thing I really, really care about for non-equivocation. That's the piece that lets me host my site on Cloudflare Pages (or Vercel, or Fly.io, or Joe's Quick and Dirty Hosting) while ensuring the software being run in my client's browser is the software I signed.
This is the pivotal thing. It needs to live in the browser. We can't leave this to a plugin.
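Concretely, the check I want browsers to do is tiny. A sketch in Python, assuming Ed25519 keys and the pyca/cryptography package (the names here are mine, not anything from a spec):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_manifest_root(pinned_pubkey: bytes, manifest_root: bytes,
                             signature: bytes) -> bool:
        """True iff the integrity manifest's root hash was signed by the
        site owner's pinned key. The pinned key itself is the one thing
        the tiny, boring transparency log would need to track."""
        try:
            Ed25519PublicKey.from_public_bytes(pinned_pubkey).verify(
                signature, manifest_root)
            return True
        except InvalidSignature:
            return False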
I'll actually argue the opposite. Transparency is _the_ pivotal thing, and code signing needs to be built on top of it (it definitely should be built into the browser, but I'm just arguing the order of operations rn).
TL;DR you'll either re-invent transparency or end up with huge security holes.
Suppose you have code signing and no transparency. Your site has some way of signaling to the browser to check code signatures under a certain pubkey (or OIDC identity if you're using Sigstore). Suppose now that your site is compromised. What is to prevent an attacker from changing the pubkey and re-signing under the new pubkey? Or just removing the pubkey entirely and signaling no code signing at all?
There are three answers off the top of my head. Lmk if there's one I missed:
1. Websites enroll into a code signing preload list that the browser periodically pulls. Sites in the list are expected to serve valid signatures with respect to the pubkeys in the preload list.
Problem: how do sites unenroll? They can ask to be removed from the preload list, but in the meantime their site is unusable. So there needs to be a tombstone value recorded somewhere to show that the site has been unenrolled. Wherever that's recorded needs to be publicly auditable, otherwise an attacker will just make a tombstone value and then remove it.
So we've reinvented transparency.
2. User browsers remember which sites have code signing after first access.
Problem: This TOFU method offers no guarantees to first-time users. Also, it has the same unenrollment problem as above, so you'd still have to reinvent transparency.
3. Users visually inspect the public key every time they visit the site to make sure it is the one they expect.
Problem: This is famously a usability issue in e2ee apps like Signal and WhatsApp. Users have a noticeable error rate when comparing just one line of a safety number [1; Table 5]. To make any security claim, you'd have to argue that users would be motivated to do this check, and get it right, for the safety numbers of every security-sensitive site they access, over a long period of time. This just doesn't seem plausible.
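For concreteness, the mechanical core that all three answers keep reinventing is an inclusion proof against a published log root. Here's a generic Merkle audit-path check in Python (a sketch; the hashing and ordering conventions are illustrative, not RFC 6962's exact leaf/node prefixes):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
        """proof is a list of (sibling_hash, sibling_is_right) pairs from
        the leaf up to the root. Anyone can run this against a published
        root, which is what makes enroll/unenroll records auditable."""
        node = h(leaf)
        for sibling, sibling_is_right in proof:
            node = h(node + sibling) if sibling_is_right else h(sibling + node)
        return node == root

    # Two-leaf example: prove the enrollment record is in the log.
    a, b = b"enroll example.com pubkey=abc123", b"tombstone other.site"
    root = h(h(a) + h(b))
    assert verify_inclusion(a, [(h(b), True)], root)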
I'll actually argue that you're arguing exactly what I'm arguing :)
My comment near the end is that we absolutely need transparency - just that the thing we need tracked, more than all the code ever run under a URL, is that one signing key. All your points are right: users aren't going to check it. It needs to be automatic, and it needs to be distributed in a way that browsers and site owners can be confident that the code being run is the code the site owner intended to be run.
Gotcha, yeah I agree. Fwiw, with the imagined code signing setup, the pubkey will be committed to in the transparency log, without any extra work. The purpose of the plugin is to give the browser the ability to parse (really fetch, then parse) those extension values into a meaningful policy. Anyways I agree, it'd be best if this part were built into the browser too.
For those in the know about such matters, where is the secret, community-audited Rust supply chain?
Let's say I want to start a new project in Rust that needs to touch web services for some reason. The standard answer today is "just use crate <X>." But let's say that I'm security sensitive and spooked by how easy it appears to be to compromise open source dependencies in 2025.
So I thought, "well, Signal is the gold standard for security and open source - let's see what they do". Libsignal's 'Cargo.lock' has 599 packages in it. Is someone at Signal auditing all of those (and monitoring them for updates)? I see many well-established shops using Rust with dependencies - I assume they're vendoring them internally and running them through their own reviews. Is that what everyone does? Or am I just being overly paranoid about the breadth of the dependency chain behind what everyone relies on as one of the most secure messaging clients?
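Not an answer to the auditing question, but at least seeing the audit surface is easy: Cargo.lock is plain TOML, so a few lines of Python (3.11+ for the stdlib tomllib; the path is assumed) enumerate every locked package:

    import tomllib

    # Cargo.lock is TOML with one [[package]] entry per locked dependency.
    with open("Cargo.lock", "rb") as f:
        lock = tomllib.load(f)

    packages = sorted((p["name"], p["version"]) for p in lock["package"])
    print(len(packages), "locked packages")
    for name, version in packages:
        print(name, version)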
Yeah? It's an eight byte header. The OS needs something to tag IP packets to get them delivered to the correct application. So you're thinking maybe a four byte header for 50% savings here?
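For reference, the whole header is four 16-bit fields. A sketch in Python:

    import struct

    # UDP header per RFC 768: source port, destination port,
    # length (header + payload), checksum. "!HHHH" = four unsigned
    # 16-bit fields in network byte order = exactly 8 bytes.
    def udp_header(src_port: int, dst_port: int, payload_len: int,
                   checksum: int = 0) -> bytes:
        # A zero checksum means "not computed" over IPv4.
        return struct.pack("!HHHH", src_port, dst_port,
                           8 + payload_len, checksum)

    assert len(udp_header(5000, 443, 1200)) == 8

The ports are the part doing application-level addressing; the length and checksum fields are the only candidates for savings.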
Good point on there needing to be some application-level addressing anyway.
On top of that, I believe the UDP checksum can be omitted as well, at least on some OSes (and is arguably not necessary for fully encrypted/authenticated payloads) – leaving really just the two bytes of the length field.
So we have a checksum of the IP header, a checksum of the UDP header plus a port number, an application-level stream ID or message ID or whatever the application transport protocol is using, and finally almost certainly an even higher-level message ID such as a URI. And that's before you introduce encryption with all its overhead. A level 4 protocol providing full integrity verification, encryption, multihoming, multiplexing, out-of-band control, and control over transmission reliability would be amazing. But the only way you can experiment with these things is to use UDP and ports. We take the concept of ports for granted, but if you think of ICMP or some of the other L4 protocols, ports aren't the only way to identify the sending and receiving application.
If we just allowed all L4 protocol numbers through and ditched NAT we could have nice things. Or we could kick it up two layers to use QUIC for what SCTP could have been.
There's going to be encryption either way in any modern protocol, and the header manipulation stuff is already all done in hardware. It's probably more efficient in UDP than as a direct IP protocol, because UDP is fast-pathed in ways protocols other than 6 and 17 (TCP and UDP) aren't.
Having a diversity of IP protocols isn't a nice thing. The designers of TCP/IP made a protocol specifically for doing the thing you wanted to see SCTP do: it was called UDP.
Why isn’t it a nice thing? And SCTP and UDP clearly provide different semantics. I am fine with experimenting with new protocols on top of UDP because it is simple to do but ultimately I think things like SCTP and QUIC should run directly on top of IP.
Lua is one of the easiest configuration file formats I've had the pleasure of working with. Readable. Has comments. Variables. Conditionals.
Everyone (including me): "oh no, no, you don't want a full Turing complete language in your configuration file format"
Also Everyone: generating their configuration files with every bespoke templating language dreamed up by gods and men, or with other Turing-complete languages.
You could solve this with a capability-based permissions system. That way the config files can be written in the same language but have configured permissions that are different from the rest of the program. So you could restrict the config files from resources like threads, evaling source, making network requests, and whatnot. Come to think of it, you could probably even section off parts of the language behind capabilities such that the config files are restricted to a non-Turing-complete subset of the language.
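Here's a sketch of that shape in Python rather than Lua (and to be clear, Python's exec is escapable, so this illustrates capability passing, not a hardened sandbox): the config runs as code, but only sees the names you explicitly grant it.

    import os

    def load_config(source: str, capabilities: dict) -> dict:
        env = {"__builtins__": {}}  # no imports, no open(), no eval()
        env.update(capabilities)    # grant exactly the names passed in
        exec(compile(source, "<config>", "exec"), env)
        return {k: v for k, v in env.items()
                if not k.startswith("_") and k not in capabilities}

    config_text = """
    threads = 4 if getenv("PROD") else 1
    log_level = "info"
    """
    settings = load_config(config_text, {"getenv": os.environ.get})
    # -> {'threads': 1, 'log_level': 'info'} when PROD is unset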
How would PGP help in the long run? If client side scanning is mandated for everything then the natural place for it to wind up is in the OS. Once your OS is scanning all the things, your privacy is finished - pretty good or otherwise.
In fact, proprietary OSes already phone home so often it's just mind-blowing. On the mobile side, only GrapheneOS and niche Linux distributions like SailfishOS are quiet if you inspect network traffic. The tools for client-side scanning are there; it would be quite easy to implement total control.
> If client side scanning is mandated for everything then the natural place for it to wind up is in the OS. Once your OS is scanning all the things, your privacy is finished - pretty good or otherwise.
An air gap can solve that problem:
1. Create an illegal message on a machine with no internet.
2. Encrypt the message.
3. Copy the encrypted message over to a machine that does have internet.
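Step 2 is the only cryptographic step, and it's small. A sketch with PyNaCl's SealedBox, assuming the recipient's public key was carried to the offline machine beforehand:

    from nacl.public import PrivateKey, SealedBox

    # Done once, on the recipient's machine: generate a keypair and
    # share only the public half.
    recipient_sk = PrivateKey.generate()
    recipient_pk = recipient_sk.public_key

    # Step 2, on the offline machine: encrypt to that public key.
    ciphertext = SealedBox(recipient_pk).encrypt(b"message written offline")

    # Step 3: only `ciphertext` ever crosses the air gap (USB stick,
    # QR code, ...). The recipient decrypts with their private key:
    assert SealedBox(recipient_sk).decrypt(ciphertext) == b"message written offline"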
In that case you could use an Arduino, Raspberry Pi, or similar to write and convert the message. The converted message can then be sent over USB, WiFi, etc. to the computer.
Right, and then Chat Control looks at the encrypted text and goes "oh huh this looks encrypted and suspicious, let's put this user on a list for closer inspection" or eventually just refuses to let you send the message at all. Steganography is hard and it will be very difficult to hide that you're sending encrypted messages.
But how do we then protect our messages to less tech-savvy people? Encryption must be effortless and usable by the masses, or it will be almost pointless.
If Chat Control passes, then encryption will not be effortless and usable by the masses; that's the whole point. Basic encrypted chat will be on the level of Snowden trying to communicate with the journalists back in the day – only possible if both parties are willing to go to great lengths.
My read is that Signal now ratchets with ML-KEM in a similar way to iMessage's PQ3, with key delivery being one of the main differentiating features.
Everyone is worried about the fact that ML-KEM keys are so chonky, so PQ3 sends them out only occasionally, while Signal chunks them up and sends them in pieces along with all normal messages. Signal's argument is that a huge re-keying message could be detected and blocked, and that chunking is both safer and smoother on bandwidth. Erasure coding will likely wind up costing a bit more overall bandwidth, but each message will be more consistently sized. Given the wide range of Signal's deployment environments, that is probably a wise tradeoff to make. I would expect that Apple has a bit more control over their networks and is in a better position to deal with adversaries attempting to actively block re-key updates.
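To make the chunking idea concrete, a toy sketch (sizes and framing invented for illustration, not Signal's wire format; real erasure coding would let the receiver reassemble from any sufficiently large subset of chunks instead of needing every one):

    # Split a large PQ public key into (index, total, piece) chunks that
    # ride along with ordinary messages.
    def chunk_key(key: bytes, chunk_size: int = 96) -> list:
        total = (len(key) + chunk_size - 1) // chunk_size
        return [(i, total, key[i * chunk_size:(i + 1) * chunk_size])
                for i in range(total)]

    def reassemble(chunks: list) -> bytes:
        _, total, _ = chunks[0]
        assert len(chunks) == total, "plain chunking needs every piece"
        return b"".join(piece for _, _, piece in sorted(chunks))

    # An ML-KEM-768 encapsulation key is 1184 bytes: 13 chunks of <=96.
    key = bytes(1184)
    assert reassemble(chunk_key(key)) == key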
[1] https://datatracker.ietf.org/doc/rfc9804