The wild west internet performed perfectly well. There are some problems here and there that could be improved, but none of them are addressed by suggestions like this. This is for control and market reach, nothing else. Secure boot was the same. The evil maid problem is at least believable in a corporate context. These suggestions are just fluffy crap.
Really? Spam, scams, SEO trash, bots, and AIs are utterly rampant.
I don’t want Google and Microsoft to have the keys to the kingdom, but on the other hand, I really want a way to know that I’m having genuine interactions with real people.
It can (that's why it's being pursued) and that, ironically enough, could even empower decentralized and P2P networks. Hear me out.
If you look at the history of the internet it's basically a story of decentralized protocols with a choice of clients being outcompeted by centralized services with a single client, usually because centralized services can control spam better (+have incentives to innovate etc, it's not just one issue).
The reason spam kills decentralized systems is that all the techniques for fighting it are totally ad-hoc security-through-obscurity tricks combined with large dollops of expensive Big Data and ML processing, all handled by full-time teams. It's stuff that's totally out of reach for indie server hosts. Even for the big guys it frequently fails!
Decentralized networks suffer other problems beyond spam due to their reliance on peers being trusted. They're fully open to attack at all times, making it risky and high effort to run nodes. They're open to obscure app-specific DoS attacks. They are riddled with Sybil attacks. They leak private data like sieves. Many features can't be implemented at all. Given all these problems, most users just give up and either outsource hosting or switch to entirely centralized services.
I used to work on the Gmail spam team, and also Bitcoin, so I have direct experience of the problems in both contexts.
Remote attestation (RA) isn't by itself enough to fix these problems, but it's a tool that can solve some of them. Consider that if USENET operators had the ability to reliably identify clients, then USENET would probably have lasted a fair bit longer. Servers wouldn't have needed to make block/allow decisions themselves, they could have simply propagated app identity through the messages. Then you could have killfiled programs as well as people. If SpamBot2000 shows up and starts flooding groups, one command is all it takes to wipe out the spam. Where it gets trickier is if someone releases an NNTP client that has legit users but which can be turned into a spambot, like via scripting features. At that point users would have to make the call themselves, or the client devs would need to find a way to limit how much damage a scripted client can do. So the decision on what is or is not "approved" would be in the hands of the users themselves, in that design.
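To make the program-level killfile idea concrete, here's a minimal sketch, assuming a hypothetical "Attested-Client" header that servers propagate from the client's attestation (nothing USENET ever actually had):

```python
# Hypothetical sketch: a killfile that matches on an attested client identity
# propagated in message headers, alongside the usual author-based rules.

KILLED_CLIENTS = {"SpamBot2000/1.0"}          # programs the community has killfiled
KILLED_AUTHORS = {"spammer@example.invalid"}  # classic per-person killfile entries

def should_drop(message: dict) -> bool:
    """Return True if the message should be dropped before display.

    `message` is assumed to carry an "Attested-Client" header derived from the
    posting client's remote attestation, plus a normal "From" header.
    """
    if message.get("Attested-Client") in KILLED_CLIENTS:
        return True
    if message.get("From") in KILLED_AUTHORS:
        return True
    return False

# One killfile entry wipes out every post made by the flagged program:
print(should_drop({"From": "alice@example.org", "Attested-Client": "SpamBot2000/1.0"}))  # True
```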
The above may sound weird, but it's a technique that allows P2P networks with client choice to be competitive against centralised alternatives. And it's worth remembering that for all the talk of the open web and whether the EU can do this or that, Facebook just did the most successful social network launch in history as a mobile/tablet-only app that blocks the EU. A really good reason not to offer a web version is that mobile-only services are much easier to defend against spam, again, because mobiles can do RA and browsers cannot. So the web is already losing in this space due to lack of these tools. Denying the web this sort of tech may seem like a short-term win, but it just means that stuff won't be served to browsers at all, and P2P apps that want to be accessible from desktops won't be able to use it either.
Anyway it's all very theoretical, because at this time Windows doesn't have a workable app-level RA implementation, so it's mobile-only for now anyway (Linux can do it between servers in theory, but not really on the desktop).
No, it can't -- see below. There's also no quantitative objective stated or communicated, hence there is no way to verify whether it achieved the stated objective or not. What would happen if it doesn't achieve it? Nothing, because it was never promised clearly enough, only in some vague way.
But it does happen to achieve a different goal -- for example, concentrating control over general computing into even fewer hands.
Would it be rolled back if it doesn't achieve the stated goal? Of course not; it will achieve the hidden ("it just happened, who could ever know, pinky swear") goal, and that's what matters. Not the pretend goals that were used to sell it to the general public.
Now, why it won't achieve the stated goals: because spam is a problem with closed systems too. Ever got a junk call? Users there use only "approved" devices, and even if the system can put limits on the source, it also limits how the destination can protect itself. The important thing with spam, scams, etc. is that whenever there is a possibility to make money, the scammers will find a way. Even with a low-tech approach (like hiring a bunch of human operators for the approved machines). They weren't stopped even when what they did was illegal, so why do you think RA will achieve what the law didn't? To make things worse, the closed nature makes it more difficult for the victims to save evidence of the spam or scam.
So of course it won't reduce the scams. But it will make the situation worse for us all. And the web losing to proprietary platforms? It will certainly lose once it is turned into one of the proprietary platforms itself.
Spam is much less of a problem with closed systems. BTW phone calls aren't a great example. The global telco system has enough players that it's closer to email than Facebook, and telcos are classed as common carriers so there's a limit to how much spam fighting they can do.
I don't really know what to tell you. This stuff does work extremely well; it's unambiguously the case. Google already use a software-only form of RA on the web and have done for years. It cut through spam like a hot knife through butter. They could already detect 10 years ago if a Python script was pretending to be Chrome, or if Chrome was pretending to be Firefox, or if IE was being driven by VBScripts or an IE WebView was embedded into apps that then manipulated the web page externally. No hardware chips or new web standards needed! But the approach used is/was in the end just a neat hack, and it's guaranteed that spammers will eventually defeat it. Perhaps they already have. I guess there must be a reason why this proposal surfaces now, given the ideas aren't new.
> To make things worse, the closed nature makes it more difficult for the victims to save evidence of the spam or scam.
I don't quite follow the logic here. Why wouldn't they be able to save evidence?
> at this time Windows doesn't have a workable app-level RA implementation
To make this work, I suppose it will finally be necessary for Windows to disallow all user-space code injection (e.g. in-process hook DLLs), including from assistive technologies. I guess this tightened security could be a per-app opt-in feature, at least initially. UI Automation on Windows 11 may finally be ready to take over the work that in-process injected DLLs (particularly from screen readers) previously did without performance regressions, though as far as I know, this hypothesis hasn't really been tested yet (or if it has, that happened inside the Windows accessibility team at Microsoft after I left). The trick will be to give the third-party screen reader developers a strong incentive to prioritize moving away from third-party code injection, without harming end-users in the process (i.e. not suddenly releasing a browser or OS update that breaks web browsing with screen readers).
What other changes or API additions do you think will be necessary to enable workable app-level RA on Windows?
Yes, it's harder for Windows. Desktop operating systems don't have all the details figured out, especially around detecting and controlling automation. RA has been around as a concept for decades, and implementations on consoles/phones/servers have pretty much worked for a while, but RA that works on general purpose desktop computers is very new, and really only Apple has it.
The Windows team would need to at least:
- Get apps using MSIX (package identity)
- Design an API to get an RA for an app that has package identity (a rough sketch of what the resulting attestation might contain follows this list). Make a proper keychain API (or better) whilst they're at it.
- You don't have to block debuggers or code injection, but if those things occur, that has to leave a trace that shows up in the RA data structure.
- Expose to apps where events come from.
- Compile databases of machine-level PCRs that reflect known good configurations on different boards. Individuals can't do that work, it's too much effort to keep up with all the different manufacturers and Windows versions that are out there. MS would need to offer an attestation service like Apple does.
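As a rough illustration of the pieces in that list, the attestation an app hands to a service might carry something like the following, which the service then checks against a vendor-maintained known-good PCR database. All the names here are hypothetical; none of this corresponds to any real Windows API.

```python
# Hypothetical sketch of an app-level attestation and a relying service's check.
# None of these names correspond to a real Windows API.
from dataclasses import dataclass, field

@dataclass
class AppAttestation:
    package_identity: str          # MSIX package family name
    debugger_attached: bool        # must leave a trace if it ever happened
    code_injection_seen: bool      # ditto for injected DLLs / in-process hooks
    event_sources: list[str]       # e.g. ["hardware"] vs ["SendInput:other.app"]
    machine_pcrs: dict[int, str] = field(default_factory=dict)  # TPM PCR values

# Database of known-good configurations the platform vendor (not individuals)
# would have to compile and keep current. Values below are placeholders.
KNOWN_GOOD_PCRS = {
    0: "a3f1...example...",   # firmware measurement for a known board/OS combo
    7: "9c42...example...",   # secure boot policy measurement
}

def service_accepts(att: AppAttestation, allowed_packages: set[str]) -> bool:
    """A relying service's policy: attested package, clean environment, known PCRs."""
    if att.package_identity not in allowed_packages:
        return False
    if att.debugger_attached or att.code_injection_seen:
        return False
    return all(att.machine_pcrs.get(i) == v for i, v in KNOWN_GOOD_PCRS.items())
```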
Some of that stuff is already there because they pushed RA in an enterprise context for a long time. I don't know how widely adopted it is though.
Apple is by no means a general computing platform, and I believe the flexibility is what makes general computing attractive.
How would you not block debuggers if they aren't verified? This adds insane busywork for little advantage and would again make Microsoft the gatekeeper of hardware.
Macs are general purpose computers. In what way are they not? Is there some task you just can't achieve with them?
There are no problems with debuggers. For one, debugging an app that isn't compiled in debug mode is very hard. If you're at that point something has gone badly wrong somewhere already. For another, there would only be a problem if you're trying to debug a production build of the browser whilst simultaneously accessing a service that wants to measure your environment. That would be an extremely specific scenario that virtually nobody would ever encounter, especially not compared to the much more common scenario of being asked to solve horrible CAPTCHAs.
The force for centralization is that, for social networks, it simply is the natural topology. People are drawn to where everybody else is, so being central is the main attractor, even if we disregard ambitions for reach. Spam is a secondary factor at best.
While spam is a problem and affects decentralized systems more easily (if they have a critical number of users), the cost of client attestation is just too high.
I am perfectly happy if the web stays open and a lot of people go into the app space and stay there. I am happy for Facebook and don't think I am missing out on the web. I don't use any apps for social media and exclusively use browsers. I wouldn't want a second app space on the web at all, because the mobile environment is an ugly abomination of software crap.
If we get a form of RA, things will get worse for users and developers alike. It will be a far worse hassle than killing a bit of spam is worth, and it gives the wrong players too much power.
Perhaps. Email, IRC, USENET and the phone system are or were all decentralized social networks. They did fine in their heyday.
If you're a 2023-web purist who's willing to just avoid whole services because they're not on your preferred platform, then hw-backed web RA would make no difference to you even if it could be implemented (which IMO it can't): you'd avoid the services that use it just like you already do today.
This is not a realistic outlook. If such systems are present, I would see additional hurdles along the way, just as we already see with Cloudflare if your requests aren't of the usual kind. This makes web discoverability much worse.
It is simply the wrong approach to focus on the negative, in this case spam or hostile bots in general.
> If SpamBot2000 shows up and starts flooding groups, one command is all it takes to wipe out the spam. Where it gets trickier is if someone releases an NNTP client that has legit users but which can be turned into a spambot, like via scripting features. At that point users would have to make the call themselves, or the client devs would need to find a way to limit how much damage a scripted client can do
At which point it comes back to not allowing anything but the most locked-down clients, and disempowering users... and still failing, because all clients can be turned into spam bots with the most trivial application of AutoHotkey et al.
- The OS can trivially expose to the app whether events are coming from real hardware or another app, information the app can then either report or not report.
- The attested user-agent string can be extended to include information about any scripts that are driving it, e.g. script hashes.
And so on. Then these things can have reputations computed over them. If there's a script hash that shows up reliably in spam, and never shows up in ham, then you can auto-mark those posts as spam. If the scripts aren't known then messages can be throttled until enough users have voted on whether the messages are spam or not. All this is fairly straightforward to code up, again, in a theoretical world in which operating systems expose information like whether events are emulated or not (today they don't).
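A minimal sketch of that kind of reputation logic, assuming hypothetical data shapes and thresholds, with attested script hashes attached to messages and spam/ham votes coming from users:

```python
# Hypothetical sketch: per-script-hash reputation computed from user votes.
from collections import Counter

spam_counts = Counter()  # script_hash -> posts users voted as spam
ham_counts = Counter()   # script_hash -> posts users voted as legitimate

def record_vote(script_hash: str, is_spam: bool) -> None:
    (spam_counts if is_spam else ham_counts)[script_hash] += 1

def classify(script_hash: str, min_votes: int = 50) -> str:
    """Return "spam", "ham", or "throttle" (not enough votes yet)."""
    spam, ham = spam_counts[script_hash], ham_counts[script_hash]
    if spam + ham < min_votes:
        return "throttle"   # rate-limit posts until enough users have voted
    if ham == 0:
        return "spam"       # shows up reliably in spam, never in ham: auto-mark
    return "ham"
```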
The trick is that clients don't have to be locked down. The tech is fundamentally about letting you prove true statements. Those statements can be as complex as needed to allow whatever level of customization and control is desired. The more malleable clients are the more complex it becomes to determine what is and isn't considered OK, but in a decentralized system that policy complexity is up to the end users themselves to decide. They can share logic in the same way USENET users used to share killfiles.
Anyway, my point isn't to try and design a full system here; it's research-level stuff. It's only to point out that this tech brings spam/abuse control out of the BigTech-only world and back into the realm of small scripts that can be written and shared by users in a decentralized way.
> If there's a script hash that shows up reliably in spam, and never shows up in ham, then you can auto-mark those posts as spam. [...] All this is fairly straightforward to code up, again, in a theoretical world in which operating systems expose information like whether events are emulated or not (today they don't).
And in a world that has zero outliers or unusual users. In reality, I guarantee my accessibility software would get flagged as emulated input (because it is) and marked as spam.
Then maybe we can also take into account whether the emulated input comes from remotely attested assistive technology. Yes, this will have the effect of at least restricting third-party assistive technology, but we have to keep in mind what's best for the largest number of people (including disabled people who aren't hackers) in the big picture, rather than taking an absolutist stance on hacker freedom.
That makes the tech far more expensive because you introduced useless overhead without gaining anything relevant.
You didn't protect non-tech-savvy users at all; on the contrary, you introduced a point of failure for their devices. Some have customized devices which would need to be verified. Doesn't sound like a good idea at all.
Again, it's all chainable. If an app is being controlled by accessibility software, the identity of that software can show up in the RA, so readers can say "it's OK if this app is automated as long as it's by something on this community maintained list of genuine accessibility tools".
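Sketching that policy under the same assumptions as before (hypothetical field names, and a community-maintained allowlist standing in for real attested identities):

```python
# Hypothetical sketch: accept automated input if the automating tool's attested
# identity is on a community-maintained list of genuine accessibility software.
COMMUNITY_AT_ALLOWLIST = {
    "NVDA",   # examples only; real entries would be attested package identities
    "JAWS",
    "Dragon NaturallySpeaking",
}

def automation_ok(attestation: dict) -> bool:
    """`attestation` is assumed to chain the app's RA with the identity of
    whatever software is driving it; an empty string means no automation."""
    driver = attestation.get("automated_by", "")
    return driver == "" or driver in COMMUNITY_AT_ALLOWLIST
```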
Sorry, but don't be silly. Government is part of the problem here. The reason the naïve freedom of the wild west internet became what it is now is the government's actions.
Corporations and governments are actually the same structure. Look at healthcare, pharma, the military: it is all so tightly connected. Now IT is just part of the puzzle.
If government were involved, they would just be acting to further enhance the moats of the largest companies, which finance their campaigns.
At least in the US. I’m not sure how EU politics is actually motivated, though they seem to advance the most useless political solutions to technological problems (browsers not having good defaults for cookies? Let’s make website owners show confusing cookie modals within the website context, that don’t usually even work!)
I live in Turkey and would totally love my government to distribute a national OS and a browser, even if with national TPM keys, and even if I did not trust my government to act in my interest. Because WEI and remote attestation will create absolute dependencies on American companies (who have no incentive to act in my interests anyway), even more than ever. And I don't think this is any good in terms of national security. The F-16 fighters sold to us that wouldn't fire on targets the USA didn't want us to hit come to mind. Thankfully we have managed to become independent from the USA in weaponry in recent years. What is a freedom problem for you is a national independence problem for us as well.