There are a ton of interesting use-cases for public city data. When I was an Airbnb host, I built an early alert system to send me an email if my address was ever reported or under investigation. The government moves at a snail's pace, so anyone who was paying attention would have plenty of time to cure any issues before any formal investigation was even started. I even had a personal dashboard showing how the enforcement office operated, how many investigators they had, which neighborhoods were getting the most enforcement actions, stats on how cases were resolved, how long they took, etc.
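The whole thing can be as simple as a scheduled job polling the city's open-data portal. A rough Python sketch of that pattern (the dataset URL, address, and email addresses below are hypothetical placeholders, and the query parameters depend on whatever API your city actually exposes):

```python
import smtplib
from email.message import EmailMessage

import requests

# Hypothetical Socrata-style open-data endpoint for code-enforcement cases.
DATASET_URL = "https://data.example-city.gov/resource/enforcement-cases.json"
MY_ADDRESS = "123 MAIN ST"


def fetch_cases():
    # Many open-data portals accept simple query parameters like this;
    # adjust to whatever your city's API actually supports.
    resp = requests.get(DATASET_URL, params={"address": MY_ADDRESS}, timeout=30)
    resp.raise_for_status()
    return resp.json()


def send_alert(cases):
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {len(cases)} enforcement case(s) found for {MY_ADDRESS}"
    msg["From"] = "alerts@example.com"
    msg["To"] = "me@example.com"
    msg.set_content("\n".join(str(c) for c in cases))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    cases = fetch_cases()
    if cases:
        send_alert(cases)
```

Run it from cron every few hours and you have an early-warning system long before anything official lands in your mailbox.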
I switched from Slack to Discord back in 2017 and I can't imagine ever going back. Their free offering is better than what you get for $$$$ from Slack.
Slack is designed for small groups of people that all know and trust each other. That security model falls apart when you scale to large low-trust organizations. Discord was designed for strangers and offers far more granular controls.
They offer unlimited search history. Unlimited users. And it's free! Can't recommend it enough.
I've seen the invite-only marketplaces where these exploits are sold. You can buy an exploit to compromise any piece of software or hardware that you can imagine. Many of them go for millions of dollars.
There are known exploits to get root access to every phone or laptop in the world. But researchers won't disclose these to the manufacturers when they can make millions of dollars selling them to governments. Governments won't disclose them because they want to use them to spy on their citizens and foreign adversaries.
The manufacturers would prefer to fix these bugs, but they usually aren't willing to pay as much as the nation-states bidding against them, so their bounties mostly just drive up the price. Worse, intelligence agencies like the NSA often pressure or incentivize major tech companies to keep zero-days unpatched for exploitation.
It's a really hard problem. There are a bunch of perverse incentives that are putting us all at risk.
Hard problems are usually collective-action problems. This isn't one. It's a tragedy of the commons [1], the commons being our digital security.
The simplest solution is a public body that buys and releases exploits. For a variety of reasons, this is a bad idea.
The less-simple but, in my opinion, better model is an insurance model. Think: FDIC. Large device and software makers have to buy a policy, whose rate is based on number of devices or users in America multiplied by a fixed risk premium. The body is tasked with (a) paying out damages to cybersecurity victims, up to a cap and (b) buying exploits in a cost-sharing model, where the company for whom the exploit is being bought pays a flat co-pay and the fund pays the rest. Importantly, the companies don't decide which exploits get bought--the fund does.
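To put rough numbers on that rate structure, here's a toy calculation (every figure is invented purely for illustration):

```python
# Toy illustration of the proposed FDIC-style scheme; all numbers are made up.
US_DEVICES = 150_000_000        # devices/users the vendor has in America
RISK_PREMIUM_PER_DEVICE = 0.25  # fixed risk premium, dollars per device per year

annual_policy_cost = US_DEVICES * RISK_PREMIUM_PER_DEVICE
print(f"Annual policy: ${annual_policy_cost:,.0f}")   # $37,500,000

# Cost-sharing on an exploit purchase: the vendor pays a flat co-pay,
# the fund pays the rest -- but the fund decides what gets bought.
EXPLOIT_PRICE = 2_500_000
VENDOR_COPAY = 250_000
fund_share = EXPLOIT_PRICE - VENDOR_COPAY
print(f"Vendor co-pay: ${VENDOR_COPAY:,}, fund pays: ${fund_share:,}")
```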
Throw in a border-adjustment tax for foreign devices and software and call it a tariff for MAGA points.
I think the actual problem is the software and hardware manufacturers.
Secure use of any device requires a correct specification. These should be available to device buyers and there should be legal requirements for them to be correct and complete.
Furthermore, such specifications should also be required for software: precisely what it does, with legal guarantees that it's correct.
This has never been more feasible. And considering that we Europeans are basically at war with the Russians, it seems reasonable to secure our devices.
We already have that: ISO 15408, Common Criteria [1]. Certification is already required and done for various classes of products before they can be purchased by the US government.
However, large commercial IT vendors such as Microsoft and Cisco were unable to achieve the minimum security requirements demanded for high criticality deployments, so the US government had to lower the minimum requirements so their bids could be accepted.
At this point, all vendors just specify and certify that their systems have absolutely no security properties and that is deemed adequate for purchase and deployment.
The problem is not lack of specification, it is that people accept and purchase products that certify and specify they have absolutely zero security.
Yes, but consumers buy, for example, graphics cards with binary blobs and are certainly not sent a specification of the software in them, or of the interfaces, etc., and that is what I believe is the absolute minimum foundation.
So I mean an internal specification of all hardware interfaces and a complete description of software-- no source code, but a complete flow diagram or multi-process equivalent.
> These should be available to device buyers and there should be legal requirements for them to be correct and complete
You're still left with a massive enforcement problem nobody wants to own. Like, "feds sued your kid's favourite toy maker because they didn't file Form 27B/6 correctly" is catnip for a primary challenger.
That's an incredibly tough sell, particularly for software. Who is it that should "require" these specifications, and in what context? Can I still put my scrappy code on Github for anyone to look at? Am I breaking the law by unwittingly leaving in a bug?
Yes, but you wouldn't be able to sell it to a consumer.
The way I imagine it: no sales of this kind of thing to ordinary people, only to sophisticated entities who could be expected to deal with the incompletely specified source code. So if a software firm wants to buy it, that's fine, but you can't shrink-wrap it and sell it to an ordinary person.
Modern software is layers upon layers of open-source packages and libraries written by tens of thousands of unrelated engineers. How do you write a spec for that?
A tragedy of the commons occurs when multiple independent agents exploit a freely available but finite resource until it's completely depleted. Security isn't a resource that's consumed when a given action is performed, and you can never run out of security.
> Security isn't a resource that's consumed when a given action is performed, and you can never run out of security
Security is in general non-excludable (vendors typically patch for everyone, not just the discoverer) and non-rival (me using a patch doesn't prevent you from using the patch): that makes it a public good [1]. Whether it can be depleted is irrelevant. (One can "run out" of security inasmuch as a stack becomes practically useless.)
Yeah, sure. But that doesn't make it a resource. It's an abstract idea that we can have more or less of, not a raw physical quantity that we can utilize directly, like space or fuel. And yes, it is relevant that it can't be depleted, because that's what the term "tragedy of the commons" refers to.
> it is relevant that it can't be depleted, because that's what the term "tragedy of the commons" refers to
I think you're using an overly-narrow definition of "tragedy of the commons" here. Often there are gray areas that don't qualify as fully depleting a resource but rather incrementally degrading its quality, and we still treat these as tragedy of the commons problems.
For example, we regulate dumping certain pollutants into our water supply; water pollution is a classic "tragedy of the commons" problem, and in theory you could frame it as a black-and-white problem of "eventually we'll run out of drinkable water", but in practice there's a spectrum of contamination levels and some decision to be made about how much contamination we're willing to put up with.
It seems to me that framing "polluting the security environment" as a similar tragedy of the commons problem holds here: any individual actor may stand to gain a lot from e.g. creating and/or hoarding exploits, and in doing so they incrementally degrade the quality of the overall security ecosystem (in a way that, in isolation, is a net benefit to them), but everyone acting this way pushes the entire ecosystem toward some threshold at which that degradation becomes intolerable to all involved.
> don't know what point you're trying to make with regards to intellectual property
Stocks. Bonds. Money, for that matter. These are all "abstract idea[s] that we can have more or less of, not a raw physical quantity." We can still characterise them as rival and/or excludable.
Security may be considered a "commons", but the accountable parties are individual manufacturers. If my car is malfunctioning, I'm punished by law enforcement. There are inspections and quality standards. Private entities may provide certifications.
The markets here are complicated and the terms on "million dollar" vulnerabilities are complicated and a lot of intuitive things, like the incentives for actors to "hoard" vulnerabilities, are complicated.
We got Mark Dowd to record an episode with us to talk through a lot of this stuff (he had given a talk whose slides you can find floating around, long before) and I'd recommend it for people who are interested in how grey-market exploit chain acquisition actually works.
Makes me wonder if there are engineers on the inside of some of these manufacturers intentionally hiding 0-days so that they can then go and sell them (or engineers placed there by companies who design 0-days).
People have been worrying about this for 15 years now, but there's not much evidence of it actually happening.
One possible reason: knowing about a vulnerability is a relatively small amount of the work in providing customers with a working exploit chain, and an even smaller amount of the economically valuable labor. When you read about the prices "vulnerabilities" get on the grey market, you're really seeing an all-in price that includes value generated over time. Being an insider with source code access might get you a (diminishing, in 2025) edge on initial vulnerability discovery, but it's not helping you that much on actually building a reliable exploit, and it doesn't help you at all in maintaining that exploit.
A good vulnerability or backdoor should be indistinguishable from a programming mistake. An indirect call. A missing check on some bytes of encrypted material. Add some validation that doesn't quite do its job and you will have a good item to sell that no one else can find.
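As a toy illustration of how small such a "mistake" can be (hypothetical code, not drawn from any real project), here's a tag check that quietly compares only a prefix:

```python
import hashlib
import hmac

KEY = b"server-side-secret"


def verify_tag(payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    # Looks like a normal constant-time comparison, but only the first
    # two bytes are actually checked, so a forged tag is accepted after
    # at most 65,536 guesses.
    return hmac.compare_digest(expected[:2], tag[:2])
```

In review this reads like a slightly sloppy but well-intentioned check; in practice it's a saleable authentication bypass.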
Are we just straight up ignoring the Jia Tan xz exploit that happened 10 months ago, which would've granted ssh access to the majority of servers running OpenSSH? Or does that not count for the purposes of this question, because that was an open source library rather than a hardware manufacturer?
Classify them as weapons of mass destruction. That's what they are. That's how they should be managed in a legal framework and how you completely remove any incentives around their sale and use.
How about some penalties for their creation? If the NSA is discovering or buying them, someone else is creating them (even if unintentionally).
Otherwise corporations will be incentivized (even more than they are now) to pay minimal lip service to security - why bother investing beyond a token amount, enough to make PR claims when security inevitably fails - if there is effectively no penalty and secure programming eats into profits? Just shove all risk onto the legal system and government for investigation and clean up.
Reminds me of the Anthropic Claude jailbreak challenge, which only pays around $10,000. If you drive the price up, I'm pretty sure you'll get some takers. Incentives are not aligned.
It's a classic timing attack. You can detect which Cloudflare datacenter is "closest" (i.e. least network latency) to a targeted Signal or Discord user.
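A rough sketch of what the probing side could look like, assuming you already have a URL for a Cloudflare-cached attachment (the URL below is a hypothetical placeholder, and the real attack also needs some way to route requests through each Cloudflare colo):

```python
import time

import requests

# Hypothetical attachment URL served through Cloudflare's CDN.
TARGET_URL = "https://cdn.example.com/attachments/123456789/image.png"


def probe(url: str):
    start = time.monotonic()
    resp = requests.get(url, timeout=10)
    elapsed_ms = (time.monotonic() - start) * 1000
    # CF-Ray ends in the IATA code of the datacenter that served the request;
    # CF-Cache-Status says whether that colo already had the object cached.
    colo = resp.headers.get("CF-Ray", "?").rsplit("-", 1)[-1]
    cache = resp.headers.get("CF-Cache-Status", "?")
    return colo, cache, elapsed_ms


if __name__ == "__main__":
    colo, cache, ms = probe(TARGET_URL)
    print(f"served by {colo}: cache={cache}, latency={ms:.0f} ms")
```

Repeat the probe from vantage points that hit different colos and the cache hits (or the latency profile) narrow down roughly where the target's client fetched the attachment from.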
Back in 2013 I discovered that you could use clickjacking to trick someone into buying anything you wanted from Amazon (assuming they were signed in). It took them almost a year to fix the issue. They never paid me a bounty.
Bug bounties are kind of a joke. They will invent almost any reason not to pay. It has to be something where the site is malfunctioning, not CSS tricks, which have to do with the browser, not the vendor. Clickjacking can work on any site, not just Amazon.
I don't think you can, but you could open a popup over the target to hide the authorisation page to make it a little less obvious. JS also has a window.close() function for opened windows, but I believe browsers might show a warning when you try that on an external origin.
One could also confuse the user by spawning a whole bunch of tabs for other services after clicking the authorise button, making the user think something weird is going on and closing all the tabs that just popped up without realising they clicked the authorisation button.
The exploit requires pages to load instantly. The first person was saying it usually takes a few hundred ms (at least) to load a page. The second person pointed out that you can load the page in the background so it is already in the local browser cache, in which case loading is near instant.
I understood the first comment as tongue in cheek, because the web has become very slow. It's a legitimate argument, too, but I read it as at least a bit tongue in cheek.
How so? The page with the double-click prompt immediately changes the parent page behind it to the target location, and it can easily show a loading indicator for a couple seconds to wait for the target page to render before prompting the user to double-click.
In other words, they are socializing the costs. Servers and electricity aren't free. Wouldn't it be better for the customer if they had no accident forgiveness and passed those cost savings along? Instead, you are paying for other people's mistakes plus all the extra overhead caused by the fraud that they incentivized.
We have to invest in fighting fraud and abuse anyway, such is the public cloud business. We don't intend to diminish user experience in service of fighting it.