I just want more trustworthy systems. This particular concept of combining reproducible builds, remote attestation and transparency logs is something I came up with in 2018. My colleagues and I started working on it, took a detour into hardware (tillitis.se) and kind of got stuck on the transparency part (sigsum.org, transparency.dev, witness-network.org).
Then we discovered snapshot.debian.org wasn't feeling well, so that was another (important) detour.
Part of me wishes we had focused more on getting System Transparency in its entirety into production at Mullvad. On the other hand, I certainly don't regret us creating Tillitis TKey, Sigsum, taking care of the Debian Snapshot service, and several other things.
Now, six years later, systemd and other projects have come a long way toward building several of the things we need for ST. It doesn't make sense to do double work, so I want to seize the moment and make sure we coordinate.
It sounds like you want to achieve system transparency, but I don't see any clear mention of reproducible builds or transparency logs anywhere.
I have followed systemd's work on Secure Boot and TPM use with great interest. It has become increasingly clear that you are heading in a direction very similar to these projects:
- Hal Finney's transparent server
- Keylime
- System Transparency
- Project Oak
- Apple Private Cloud Compute
- Moxie's Confer.to
I still remember Jason introducing me to Lennart at FOSDEM in 2020, and we had a short conversation about System Transparency.
I'd love to meet up at FOSDEM. Email me at fredrik@mullvad.net.
Edit: Here we are six years later, and I'm pretty sure we'll eventually replace a lot of things we built with things that the systemd community has now built. On a related note, I think you should consider using Sigsum as your transparency log. :)
Edit2: For anyone interested, here's a recent lightning talk I did that explains the concept that all projects above are striving towards, and likely Amutable as well: https://www.youtube.com/watch?v=Lo0gxBWwwQE
Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to us. We want to understand how we can put the right roots of trust and observability into your hands.
Edit: I've reached out privately by email for next steps, as you requested.
Hi David. Great! I actually wasn't planning on going due to other things, but this is worth re-arranging my schedule a bit. See you later this week. Please email me your contact details.
As I mentioned above, we've followed systemd's development in recent years with great interest, as well as that of some other projects. When I started(*) the System Transparency project it was very much a research project.
Today, almost seven years later, I think there's a great opportunity for us to reduce our maintenance burden by re-architecting on top of systemd and related tooling, freeing us to focus elsewhere. There's still a lot of work to do on standardizing transparency building blocks, the witness ecosystem(**), and building an authentication mechanism for system transparency that weaves it all together.
I'm more than happy to share my notes with you. Best case you build exactly what we want. Then we don't have to do it. :)
I'm super far from an expert on this, but it NEEDS reproducible builds, right? You need to start from a known good, trusted state - otherwise you cannot trust any new system states. You also need it for updates.
Well, it comes down to what trust assumptions you're OK with. Reproducible builds reduce trust in the build environment, but you still need to ensure authenticity of the source somehow. Verified boot, measured boot, reproducible builds, local/remote attestation, and transparency logging each provide different things. Combined, they make possible a sort of authentication mechanism between a server and a client. However, each of these concepts is useful on its own.
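To make the combination concrete, here's a toy sketch in Python. All names and data structures are hypothetical (real attestation quotes are signed by a hardware root of trust, and real logs return Merkle inclusion proofs); the point is only the shape of the check: the client accepts a server if the attested measurement matches a reproducibly built image digest that is also present in a public transparency log.

```python
import hashlib

# Toy model: every name below is illustrative, not a real API.

def reproducible_build_digest(source: bytes) -> str:
    """Stand-in for 'anyone can rebuild the image and get the same digest'."""
    return hashlib.sha256(source).hexdigest()

class TransparencyLog:
    """Append-only list of published image digests. A real log would be a
    Merkle tree offering verifiable inclusion and consistency proofs."""
    def __init__(self) -> None:
        self.entries: list[str] = []

    def append(self, digest: str) -> None:
        self.entries.append(digest)

    def is_logged(self, digest: str) -> bool:
        return digest in self.entries

def attest(running_image: bytes) -> str:
    """Stand-in for a TPM/TEE quote over the measured boot state."""
    return hashlib.sha256(running_image).hexdigest()

def client_accepts(quote: str, expected: str, log: TransparencyLog) -> bool:
    # 1. The attested measurement must match the expected reproducible build.
    # 2. That build must be publicly logged, so a tampered image can't be
    #    served secretly to a single victim without leaving a public trace.
    return quote == expected and log.is_logged(expected)

# Honest server: the running image matches the published, logged digest.
image = b"os image built reproducibly from audited source"
digest = reproducible_build_digest(image)
log = TransparencyLog()
log.append(digest)
print(client_accepts(attest(image), digest, log))          # True
print(client_accepts(attest(b"backdoored"), digest, log))  # False
```

Each leg covers a different gap: reproducibility ties the digest to auditable source, attestation ties the running machine to that digest, and the log makes targeted substitution publicly detectable.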
Obviously it’s far more nuanced than that. I’d say there are several categories where a reasonable person could have reservations (or not) about LLMs:
Copyright issues (related to training data and inference), openness (OSS, model parameters, training data), sovereignty (geopolitically, individually), privacy, deskilling, manipulation (with or without human intent), AGI doom. I have a list but not in front of me right now.
Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't, yet look at most of the replies here.
> Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't...
Did you read Mr. Bushell's policy [0], which is linked to by TFA? Here's a very relevant pair of sentences from the document:
> Whilst I abstain from AI usage, I will continue to work with clients and colleagues who choose to use AI themselves. Where necessary I will integrate AI output given by others on the agreement that I am not held accountable for the combined work.
And from the "Ensloppification" article [1], also linked by TFA:
> I’d say [Declan] Chidlow verges towards AI apologism in places but overall writes a rational piece. [2] My key takeaway is to avoid hostility towards individuals†. I don’t believe I’ve ever crossed that line, except the time I attacked you [3] for ruining the web.
> † I reserve the right to “punch up” and call individuals like Sam Altman a grifter in clown’s garb.
Based on this information, it doesn't seem that Mr. Bushell will hate anyone for using "AI" tools... unless they're CEO pushers.
Or are you talking in generalities? If you are, then I find the unending stream of hype articles from folks using this quarter's hottest tool to be extremely uninteresting. It's important for folks who object to the LLM hype train to publish and publicize articles as a counterpoint to the prevailing discussion.
As an aside, the LLM hype reminds me of the hype for Kubernetes (which I was personally enmeshed in for a great many years), as well as the Metaverse and various varieties of Blockchain hype (which I was merely a bystander for).
That's a very thorough takedown of something the guy you're replying to never said. The end of their comment was "yet look at most of the replies here".
> That's a very thorough takedown of something the guy you're replying to never said.
Nah. Consider the context:
Aren't we all tired by this anti-AI stuff?
"Look at how I use this cool new technology" tends to be much more interesting to me than "this new technology has changed my job and I refuse to use it because I'm afraid".
[Copyright concerns, openness, sovereignty, privacy, deskilling, manipulation and AGI doom] are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't, yet look at most of the replies here.
This trail of complaints was about the article from which I quoted.
Given the opinion on anti-"AI" articles suggested by that trail of complaints, I'd wager he either didn't read the article and the supporting materials it links to, or didn't read them thoroughly. That's totally fine, but do folks the courtesy of going back and reading more carefully (or at all) when someone indicates that one's understanding of the material is substantially incorrect.
> An account-number model like Mullvad's would seem preferable
Thank you! :)
> .. assuming vendor’s TEE actually works
TEEs certainly have a rich history of vulnerabilities and nuanced limitations in their threat models. The concept, however, is really powerful, and implementers will likely get things more and more right.
As for GPUs, some of Nvidia’s hardware does support remote attestation.
He IS a hacker from the 90s. It’s an assumed name. Plenty of hackers from the 90s have pseudonyms.
> so-called creator of some encryption protocol
All evidence points to him being one of the protocol’s designers, along with Trevor Perrin.
I’ve met both of them. The first time I met Moxie and talked about axolotl (as it was called back then) was in 2014. Moxie and Trevor strike me as having more integrity and conviction than most. There is no doubt in my mind that they are real and genuine.
Interestingly enough, some of the work Trevor did related to Signal’s cryptography was later used by Jason Donenfeld in the design of WireGuard.
> It screams honeypot like nothing else.
As you can see there is plenty of evidence suggesting otherwise.
It’s exciting to hear that Moxie and colleagues are working on something like this. They definitely have the skills to pull it off.
Few in this world have done as much for privacy as the people who built Signal. Yes, it’s not perfect, but building security systems with good UX is hard. There are all sorts of tradeoffs and sacrifices one needs to make.
For those interested in the underlying technology, they’re basically combining reproducible builds, remote attestation, and transparency logs. They’re doing the same thing that Apple Private Cloud Compute is doing, and a few others. I call it system transparency, or runtime transparency. Here’s a lightning talk I did last year: https://youtu.be/Lo0gxBWwwQE
I don't know, I'd say Signal is perfect, as it maximizes "privacy times spread". A solution that's more private wouldn't be as widespread, and thus wouldn't benefit as many people.
Signal's achievement is that it's very private while being extremely usable (it just works). Under that lens, I don't think it could be improved much.
>Signal's achievement is that it's very private while being extremely usable (it just works).
Exactly. Plus it basically pioneered multi-device E2EE. E.g., Telegram claimed defaulting to E2EE would kill multi-client support:
"Unlike WhatsApp, we can allow our users to access their Telegram message history from several devices at once thanks to our built-in instant cloud sync"
> I think the right course of action should be a political activism, not a technological one. Especially when the company doing it makes a fortune.
We tried that. My cofounder and I, as well as several of our colleagues, tried classic political activism in the early 2000s. It became increasingly clear to us that there are many powerful politicians, bureaucrats and special interest groups that don't act in good faith. They lie, abuse their positions, misuse state funds and generally don't care what the population or civil society thinks. They have an agenda, and don't know the meaning of intellectual honesty.
> The course, when one can just disengage from participating in society by sidestepping the problems by either using VPNs in terms of censorship .. is very dangerous and will reinforce the worst trends.
It sounds like you're arguing for censored populations to respect local law, not circumvent censorship through technological means, and only work to remove censorship through political means.
Generally, the more a state engages in online censorship the less it cares about what its population thinks. There are plenty of jurisdictions where political activism will get you jailed, or worse.
Are you seriously suggesting that circumventing state censorship is immoral and wrong?
> So instead of speaking from the high ground, please, tell us what your solution about mass disinformation happening from US social media megacorps, Russia mass disinformation, mass recruitment of people for sabotage on critical infrastructure.
Social media companies make money by keeping people engaged, and it seems the most effective way of doing that is to feed people fear and rage bait. Yes, that's a problem. As are disinformation campaigns by authoritarian states.
Powerful companies have powerful lobbyists, and systematically strive for regulatory capture. Authoritarian states that conduct disinformation campaigns against their own populations are unlikely to listen to reform proposals from those populations.
I don't claim to have a solution for these complex issues, but I'm pretty sure mass surveillance and censorship will make things worse.
> Tell us, how can we keep living in free society when this freedom is being used as a leverage by forces trying to destroy your union.
Political reform through civil discourse cannot be taken for granted. Mass surveillance and censorship violate the principle of proportionality, and do not belong in a free society.
> Please, give us your political solutions to the modern problems instead of earning a fortune by a performance free speech activism.
I'm not sure what you mean by performance. Please clarify.
> My cofounder and I, as well as several of our colleagues, tried classic political activism in the early 2000s. It became increasingly clear to us that there are many powerful politicians, bureaucrats and special interest groups that don't act in good faith. They lie, abuse their positions, misuse state funds and generally don't care what the population or civil society thinks. They have an agenda, and don't know the meaning of intellectual honesty.
I understand that.
You created a company which allows people to regain freedoms limited by their governments.
My only problem is that it ultimately undermines government power and makes it weaker.
By creating technical solutions that subvert government functions, you have basically moved into the business of bypassing government regulations for people with money. Obviously, when the market becomes large enough, governments can no longer ignore it.
The problem is that it creates reinforcement loops in such ways that political change becomes more difficult.
For example, we may imagine that Russia and China target people through social media. I believe that the effectiveness of this influence cannot be overstated, so naturally some governments may start thinking about limiting it by enforcing bans on some social media platforms or creating laws to force them to be more transparent. You may not agree with this personally, and believe in freedom of choice, but you are still in the business of exposing people to enemy propaganda against their democratically elected governments.
> It sounds like you're arguing for censored populations to respect local law, not circumvent censorship through technological means, and only work to remove censorship through political means.
Yes, in democratic countries I believe the population should feel the pressure and resolve it by electing politicians who represent their values, not by buying workarounds from a vendor.
I believe that the exact same ads you have on the streets in the cities should be published by politicians or NGOs and not a business.
> Generally, the more a state engages in online censorship the less it cares about what its population thinks. There are plenty of jurisdictions where political activism will get you jailed, or worse.
I agree with that. To be honest, I do care about the EU mostly and I do think that political activism is still possible even when there is additional risk.
> Are you seriously suggesting that circumventing state censorship is immoral and wrong?
There is a very fine line, and I don't know the answer. I do believe that people should have a right to private communication. I also do not trust law enforcement agencies and the people there.
On the other hand, I do know that vulnerable people (teens, minorities, the sick, the elderly) in my country get recruited by Russia en masse through messengers. I do know that Russia engages in psychological warfare through Telegram, Facebook and TikTok without governments being able to do anything. I do see politicians in Western countries aligning with the psychological warfare of enemies because it helps them get into power.
I do want politicians to fight for my rights, but I don't want that from businesses, to be honest.
> I'm not sure what you mean by performance. Please clarify.
I mean, activism is clearly a part of your business strategy. The more discussion you create around issues related to privacy and censorship the more users you'll have - that's why I call it performative. Mullvad's business depends on the performance of fighting for the rights at the same time as benefitting from the fight itself.
I do feel that there is a big disconnect between finding a technical solution and finding a political solution. The tech sector is becoming more and more influential, and I believe this will not end well.
> Thank you for the reply, I really appreciate it.
Likewise.
> You created a company which .. ultimately undermines government power and makes it weaker.
Undermining the power of governments and other powerful entities has benefits and drawbacks. Our thesis is that making mass surveillance and online censorship ineffective is a net good for humanity in the long term.
You are arguing that censorship is a net good in the much more specific context of disinformation campaigns on social media during war time. Yes, government censorship might be effective and proportional in that context. It could also backfire.
You are also arguing that the dynamics and algorithms of social media are the vector through which disinformation spreads. Wouldn't it then be more effective and proportional to target social media for regulation?
>> It sounds like you're arguing for censored populations to .. not circumvent censorship through technological means..
> Yes, in democratic countries..
What should people in undemocratic countries do?
> I believe that the exact same ads you have on the streets in the cities should be published by politicians or NGOs and not a business.
> .. I do think that political activism is still possible even when there is additional risk.
> On the other hand, I do know that vulnerable people (teens, minorities, the sick, the elderly) in my country get recruited by Russia en masse through messengers. I do know that Russia engages in psychological warfare through Telegram, Facebook and TikTok without governments being able to do anything.
I agree that is a serious problem and I don't know how to solve it. I'm sorry.
> I do want politicians to fight for my rights, but I don't want that from businesses, to be honest.
Why not?
> I mean, activism is clearly a part of your business strategy.
From a cause-and-effect point of view it would be more correct to say that starting a business is a part of our activism strategy. My opinions on the proportionality of mass surveillance and government censorship were formed a decade before I started Mullvad. Running a business is hard work, and if I didn't believe in its mission I would move on to something easier.
> The more discussion you create around issues related to privacy and censorship the more users you'll have - that's why I call it performative. Mullvad's business depends on the performance of fighting for the rights at the same time as benefitting from the fight itself.
I see. I interpreted it as "for show" in the sense of not being genuine.
2. Are you looking for pilot customers?