I don't think you know anything about how these industries work and should probably read some of the published books about them, like "This Is How They Tell Me The World Ends", instead of speculating in a way that will mislead people. Most purchasers of browser exploits are nation-state groups (the "gray market") who are heavily incentivized not to screw the seller and would just wire some money directly; these aren't black-market sales.
I mean, you're still restricted to selling it to your own government, otherwise getting wired a cool $250k directly would raise a few red flags I think. And how many security researchers have a contact in some government-sponsored hacking company anyway? Do you really think that convincing them to buy a supposed zero-day exploit as a one-off would be easy?
Say you're in the US. I'm sure there are some CIA teams or whatever making use of Chromium exploits "off the record", but for any official business the government would just put pressure on Google directly to get what they want. So any project making use of your zero-day would be so secret that it'd be virtually impossible for you to even get in contact with anybody interested in buying it. Sure, they might not try to "screw you", but it's sort of like going to the CIA and saying, "Hey, would you be interested in buying this cache of illegal guns? Perhaps you could use it to arm Cuban rebels". How do you think they would respond to that?
Defence firms like Raytheon are often happy to pay for stuff like this. What happens afterwards with the exploit is anybody's guess. Source: a vague memory of a Darknet Diaries episode.
Eh, not really? If it's a legit company who provides services to various governments, they're going to pay you, they're going to report the income to the government, you'll get a 1099 for contract/consulting, and you'll pay your taxes on the legit income. No red flags. Assuming they're legit and not currently sanctioned by the US government that is.
Browser exploits are almost always two steps: you exploit a renderer bug in order to get arbitrary code execution inside a sandboxed process, and then you use a second sandbox escape exploit in order to gain arbitrary code execution in the non-sandboxed broker process. The first line of that (almost definitely AI generated) summary is the bad part, and means that this is one half of a full browser compromise chain. The fact that you still need a sandbox escape doesn't mean that it is harmless, especially since if it's being exploited in the wild that means whoever is using it probably does also have a sandbox escape they are pairing with it.
It's really funny reading the thought processes, where most of the time the agent doesn't actually remember trivial things about the cards they or their opponent are playing (thinking they have different mana costs, have different effects, mixing up one card's effect with another's). The fact that they're able to take game actions and win against other agents is cute, but it doesn't inspire much confidence.
The agents also constantly seem to evaluate whether they're "behind" or "ahead" based on board state, which is a weird way of thinking about most games and often hard to evaluate, especially for decks like control, which care more about resources like mana and card advantage and always plan on stabilizing late game.
You might be looking at really old games (meaning, like, Saturday) - I've made a lot of harness improvements recently which should make the "what does this card do?" hallucinations less common. But yeah, it still happens, especially with cheaper models - it's hard to balance "shoving everything they need into the context" against "avoid paying a billion dollars per game or overwhelming their short-term memory". I think the real solution here will be to expose more powerful MCP tools and encourage them to use the tools heavily, but most current models have problems with large MCP toolsets so I'm leaving that as a TODO for now until solutions like Anthropic's https://www.anthropic.com/engineering/code-execution-with-mc... become widespread.
Yeah, this is basically Sovereign Citizen-tier argumentation: through some magic of definitions and historical readings and arguing about commas, I prove that actually everyone is incorrect. That's not how programming languages work! If everyone for 10+ years has been developing compilers with some definition of undefined behavior, and all modern compilers use undefined behavior in order to drive optimization passes which depend on those invariants, there is no possible way to argue that they're wrong and you know the One True C Programming Language interpretation instead.
Moreover, compiler authors don't just go out maliciously trying to ruin programs by hunting for ever more tortured undefined behavior for fun: the vast majority of undefined behavior in C covers things that, if a compiler couldn't assume the programmer upheld them, would inhibit trivial optimizations that the programmer also expects the compiler to be able to do.
Where I find the argument gets lost is when undefined behavior is assumed to be exactly that: an invariant.
That is to say, I find "could not happen" the most bizarre reading to adopt when optimizing around undefined behavior. "Whatever the machine does" makes sense, as does "we don't know". But "could not happen"? If it could not happen, the spec would have said "cannot happen"; instead, the spec does not know what will happen and so punts on the outcome, knowing full well that it will happen all the time.
The problem is that there is no optimization to be made around "whatever the hardware does" or "we have no clue", so the incentive is to choose the worst possible reading: "undefined behavior is incorrect code, and therefore a correct program will never contain it".
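A minimal example of what that reading buys the optimizer (the function name is mine; the folding described is what GCC and Clang actually do at -O2):

```c
/* Because signed overflow is undefined, the compiler may assume
   x + 1 > x always holds for int, and fold this whole function to
   "return 1", even for x == INT_MAX, where the comparison would
   fail on hardware that wraps. */
int step_increases(int x) {
    return x + 1 > x;
}
```

Under the "whatever the hardware does" reading, the comparison would have to be emitted; under "could not happen", it disappears entirely.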
Some behaviors are left implementation-defined or unspecified instead of undefined, which allows each implementation to choose whatever behavior is convenient, such as, as you put it, whatever the hardware does. IIRC this was the case in C for the sign of % with negative operands, before C99 defined it to truncate toward zero.
I would imagine that the standard writers choose one or the other depending on whether the behavior is useful for optimizations. There's also the matter that if a behavior is currently undefined, it's easy to later on make it unspecified or specified, while if a behavior is unspecified it's more difficult to make it undefined, because you don't know how much code is depending on that behavior.
I think this is not really true. Or rather, it depends on the UB you are talking about. There is UB which is simply UB because it is out of scope for the C standard, and there is UB such as signed integer overflow that can cause issues. It is realistic to deal with the latter, e.g. by converting it to traps with a compiler flag.
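As a sketch of dealing with the latter kind: GCC and Clang can turn signed-overflow UB into a runtime trap with -fsanitize=signed-integer-overflow (or -ftrapv), and code can also pre-check using only defined operations (the helper name here is mine):

```c
#include <limits.h>

/* Pre-check for signed int addition overflow. The check itself uses
   only well-defined comparisons, so it behaves the same at any
   optimization level, unlike checking the result after the fact. */
static int add_would_overflow(int a, int b) {
    return (b > 0 && a > INT_MAX - b) ||
           (b < 0 && a < INT_MIN - b);
}
```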
> I think this is not really true. Or rather, it depends on the UB you are talking about.
I mean, if you're going to argue that a compiler can do anything with any UB, then by all means make that argument.
Otherwise, no, I don't think it's reasonable for a compiler to cause an infinite loop inside a function simply because that function itself doesn't return a value.
When you say "cause", do you mean insert on purpose, or do you mean cause by accident? I could see the latter happening, for example because the compiler doesn't generate a ret if the non-void function doesn't return anything, so control flow falls through to whatever code happens to be next in memory. I'm not aware of any compiler that does that, but it's something I could see happening, and the developers would have no reason to "fix" it, because it's perfectly up to spec.
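For reference, the kind of function under discussion looks like this (a sketch; what a given compiler emits for the bad path varies):

```c
/* In C, falling off the end of a non-void function is only undefined
   behavior if the caller uses the "return value" (in C++, merely
   reaching the end is UB). Compilers warn here, but are free to emit
   no ret instruction at all for the x <= 0 path, at which point
   control falls through into whatever code sits next in memory. */
int sign_or_nothing(int x) {
    if (x > 0)
        return 1;
    /* no return here */
}
```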
The problem was that the loop itself was altered, rather than that the function returned and then that somehow caused an infinite loop.
> I'm not aware of any compiler that does that, but it's something I could see happening, and the developers would have no reason to "fix" it, because it's perfectly up to spec.
I am not sure what statement you are responding to. I am certainly not arguing that. I disagree with your claim that "it is practically impossible to find a program without UB".
A study found that, for a particular subset of UB (code that had legal, detectable behavior changes at differing optimization levels), 40% of Debian Wheezy packages exhibited this UB.
I know, but this still leaves 60% of programs without such UB, which is far from "it is practically impossible to find a program without UB". Also, this was a study from 2013, and many of the bugs it found have since been fixed. And GCC only got UBSan in 2013 (after this study).
That's "UB that was detected in this study". Since gcc will silently break code when it detects UB and you can't tell until you hit that specific case, the 40% is a lower bound. In practice it could be anything up to the full 100%.
Uhh... mathematics and logic? Since there's no perfect UB detector, one that detects UB in 40% of programs can only be presenting a lower bound. And I don't know why you think C programs rely on UB; they have it present without the programmer knowing about it.
Aliasing being the classic example. If code generation for every pointer dereference has to assume that it’s potentially aliasing any other value in scope, things get slow in a hurry.
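A sketch of that cost (function names are mine): without restrict, the store through dst could modify *scale, so the compiler must reload it on every iteration; with restrict it can hoist the load out of the loop.

```c
/* May alias: *scale must be re-read each iteration, because
   dst[i] = ... could have overwritten it. */
void scale_array(float *dst, const float *src, const float *scale, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * *scale;
}

/* restrict promises the pointers don't alias, so *scale can be
   loaded once, before the loop. */
void scale_array_fast(float *restrict dst, const float *restrict src,
                      const float *restrict scale, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * *scale;
}
```

Both functions compute the same result when the promise holds; the restrict version just lets the compiler skip the redundant loads.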
Or better yet, the built-in Version Tracker, which is designed for porting markup to newer versions of binaries with several different heuristic tools for correlating functions that are the same due to e.g. the same data or function xrefs, and not purely off of identical function hashes...
Going off of only FunctionID will have either a lot of false positives or a lot of false negatives, depending on whether you compute the hashes with operands masked out. If you mask out operands, then "*param_1 = 4" and "*param_1 = 123" get the same hash. If you don't, then nearly all functions hash differently, because call displacements shift with code layout. That's why the built-in Version Tracker tool uses hashes as only one of its heuristics and applies several other correlation heuristics as well.
OpenSSL is used by approximately everything under the sun. Some of those users will be vendors that use default compiler flags without stack cookies. A lot of IoT devices for example still don't have stack cookies for any of their software.
2026 and we still have bugs from copying unbounded user input into fixed size stack buffers in security critical code. Oh well, maybe we'll fix it in the next 30 years instead.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to; they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R. Hoare, "The 1980 ACM Turing Award Lecture"
The actual vulnerability is indeed the copy. What we used to do is this:
1. Find out how big the data is; we tell the ASN.1 code how big it's allowed to be, but since we're not storing it anywhere yet, those limits don't matter
2. Check we found at least some data: zero isn't OK, failure isn't OK, but too big is fine
3. Copy the too-big data into a local buffer
The API design is typical of C and has the effect of encouraging this mistake:
int ossl_asn1_type_get_octetstring_int(const ASN1_TYPE *a, long *num, unsigned char *data, int max_len)
That "int" we're returning is either -1 or the claimed length of the ASN.1 data, without regard to how long that is or whether it makes sense.
This encourages people either to ignore the return value entirely (it's just some integer, who cares; in the happy path this works) or to check it for -1, which indicates some fatal ASN.1-layer problem, give up in that case, and ignore all other values.
If the thing you got back from your function was a Result type you'd know that this wasn't OK, because it isn't OK. But the "Eh, everything is an integer" model popular in C discourages such sensible choices because they were harder to implement decades ago.
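A sketch of the trap from the caller's side, using a mock with the same overall shape as the OpenSSL function (the mock, its name, and its numbers are mine, not OpenSSL code):

```c
#include <string.h>

/* Returns -1 on a parse error, otherwise the *claimed* length of the
   data, which can exceed max_len. Only up to max_len bytes are
   actually written, but nothing in the return value tells the caller
   that. */
static int mock_get_octetstring(unsigned char *data, int max_len,
                                const unsigned char *src, int claimed_len) {
    if (claimed_len < 0)
        return -1;
    int n = claimed_len < max_len ? claimed_len : max_len;
    memcpy(data, src, (size_t)n);
    return claimed_len;
}
```

A caller that only checks for -1 will happily treat a 200-byte claimed length as "bytes in my 64-byte buffer" and read past the end. A Result-style return, or at minimum a separate out-parameter for bytes actually written, makes that confusion much harder to ignore.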
The Win32 API at some point started using the convention of passing the buffer length by pointer. If the buffer is too small, the API function updates that length with the required buffer size and returns an error code.
I quite like that, within the confines of C. I prefer the caller be responsible for allocations, and this makes it harder to mess up.
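A sketch of that convention (the function and error code here are made up for illustration, not an actual Win32 API):

```c
#include <stddef.h>
#include <string.h>

#define ERR_BUFFER_TOO_SMALL 1

/* On success: copies the name, sets *len to bytes written, returns 0.
   On a too-small (or NULL) buffer: sets *len to the required size and
   returns an error, so the caller can allocate and retry. */
static int get_name(char *buf, size_t *len) {
    const char *name = "example";
    size_t need = strlen(name) + 1;     /* include the NUL terminator */
    if (buf == NULL || *len < need) {
        *len = need;                    /* tell the caller what to allocate */
        return ERR_BUFFER_TOO_SMALL;
    }
    memcpy(buf, name, need);
    *len = need;
    return 0;
}
```

The typical call pattern is two-step: probe with a NULL buffer to learn the size, allocate exactly that much, then call again. The caller owns every allocation, and an under-sized buffer produces an explicit error rather than a silent truncation or overflow.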
I get their point that you can't provide a "No" in the reminder. But there should be an option (maybe even hidden under "advanced settings - here be dragons!") for this.
Problem is (and that was their argument) that people press this button all the time without reading the dialogue at all, and then won't know how to turn it back on. A messenger app has to deal with very technically illiterate people. But there should be an option in settings for the tech-savvy user.
Signal is an interesting case study in UX failure. I and a bunch of other tech-forward people were on it in its heyday, but after they removed SMS support and implemented shitty UX like that nag dialog, neither I nor a single person I know uses it any more. Everyone is on WhatsApp or iMessage.
It may be cryptographically superior, but does that matter at the end of the day if nobody uses it?
Cryptographic superiority aside, Signal doesn't collect personal data, unlike WhatsApp. For me that's the main reason to use it. The UX is good enough, although some things could certainly be improved.
WhatsApp should be a non-starter. What Mark Zuckerberg did to WhatsApp should be required reading for anyone using the internet; read it and then decide whether you still want to use Facebook (never mind, they build a shadow profile for you anyway).
A few of my neighbors have kids the same age as my kids, they're on a WhatsApp group chat, and my choice is either use WhatsApp or make my kid miss out on social events, so it's not really a choice.
"Hey let's switch to this app that nobody else is using and it sends you annoying popups every month but trust me bro it's more secure" is not a winning argument
Every so often I consider writing the "STFU license." Something like GPL but if you use this code, even as a library, you can't give people unwanted notifications. Would need to be pretty comprehensive and forward compatible to cover all the crazy cases that notification-enthusiasts dream up.
This. We must change the law so that a field like the one above does not count as consent. And while we're at it, we must change "silence is agreement" to "silence is disagreement". This applies to ToS changes, price increases, etc. That means that if I don't click a button saying "I agree", the ToS change is not accepted, and they have to cancel/delete my account.
Didn't the FCC remove the "1-click unsubscribe" requirement because it would "provide more choice and lower prices to all users across the board" (i.e. companies can rip off more users and create pseudo-lower prices)?
The EU has its GDPR, and it has some teeth, but the US is currently hopeless on that front, from my vantage point.
The FTC established a "click-to-cancel" rule, but (as with so many regulations in the US) it was blocked by an appeals court. Federal law says there's a hoop they have to jump through for rules with an impact of more than $100 million, and they didn't jump through it because they didn't think the impact was that high.
I like to frame it like this: "ask me later" is rape culture. It promotes and reinforces a culture of never taking "no" for an answer, and pushing one's agenda/intent regardless of the preference/consent of the other party/parties.
I see the point you're making but this sort of hyperbole has a tendency to turn people away from whatever point you're trying to make unless they already agree with you.
I was visiting a girlfriend once, and she was in the process of moving in the same city. There was a telephone bill on top of her dresser, and I noticed that she had noted "butt-rape fee" next to one of the line items there.
Now she is a very literate woman and loves poetry and "Penny Dreadfuls", so she uses language and words very deliberately. And so, I asked her why she wrote that, and she said it was some sort of unnecessary fee that they were charging to move her line from one address to another, and she clearly resented their opportunistic capitalism.
I certainly sympathized with her, especially since she is the type of woman who has probably been subjected to that sort of actual trauma in her own life, or in that of her friends; she had every right to compare the experiences.
If a single engineer can sabotage a project, then the company has bigger things to worry about.
There should be backups, or you know, GitHub with branch protection.
Aside from that, perverse incentives are a real problem with these systems, but not an insurmountable one.
Everyone on the project should be long on the project; if they don't think it will work, why are they working on it?
At the very least, people working on the project should have to disclose their position on the project, and the project lead can decide whether they are invested enough to work on it.
Part of the compensation for working on the project could be long bets paid for by the company, you know like how equity options work, except these are way more likely to pay out.
If no one wants to work on a project, the company can adjust the price of the market by betting themselves.
Eventually it will be a deal that someone wants to take.
And if it's not, then why is the project happening? Clearly everyone is willing to stake money that it will fail.