LLMs made them twice as efficient: with just one release, they're burning tokens and their reputation.
It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there?
To be honest, sometimes on my hobby projects I don't commit anything in the beginning (not a great strategy, I know) and then just dump everything in one large commit.
I've also been guilty of plugging away at something, and squashing it all before publishing for the first time, because I look at the log and go "no way I can release this, or untangle it into any sort of usefulness".
I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.
That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't spent real time (more than a week!) in actual production.
But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.
It was (and is) quite common for corporate projects that become open-source to be born inside an internal repository/monorepo. When the decision is made to open-source them, the initial public commit is just a dump of the files in a snapshotted, public-ready state, rather than a replay of the internal-repo history (which, even with tooling to rebase partial history, would make it immensely harder to audit that internal information wasn't improperly released).
So I wouldn't use a single initial commit as a signal of AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :)
I usually work in branches in a private repo, squash and merge features/fixes there, and only merge the clean, verified, extensively tested results back to the public repo.
You don't need to see every single commit and the exact chronology of my work; snapshots are enough :)
Taking a best-faith approach here, I think it's indicative of a broader issue: code reviewers can easily get "tunnel vision", where the focus shifts to reviewing each line of code rather than cross-referencing against both the small details and the highly salient "gotchas" of the specification/story/RFC, and making sure those details aren't missing from the code.
This applies whether the code is written by a human or an AI, and also whether the code is reviewed by a human or an AI.
Is a GitHub Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR being reviewed? Or read the relevant RFCs? (And does it even have permission to do all this?)
And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?
This all leads to the conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.
This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.
If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim can it start to mean something.
> Every line was thoroughly reviewed and cross-referenced with relevant RFCs
The issue in the CVE comes from a direct contradiction of the RFC. The RFC says you MUST check redirect URIs (and, as anyone who's ever worked with OAuth knows, all the functionality around redirect URIs is a staple of how OAuth works in the first place -- this isn't some obscure edge case). They didn't make a mistake; they simply did not implement this part of the spec.
When they said every line was "thoroughly reviewed" and "cross referenced", yes, they lied.
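For the curious, the check the RFC mandates isn't much code. Here's a minimal sketch in Python (hypothetical names like Client, OAuthError and validate_redirect_uri, not taken from the actual project), assuming redirect URIs were registered up front and are compared by exact string match, as the OAuth security guidance recommends:

    from dataclasses import dataclass

    class OAuthError(Exception):
        """Raised when an authorization request fails validation."""

    @dataclass
    class Client:
        client_id: str
        registered_redirect_uris: list[str]

    def validate_redirect_uri(client: Client, redirect_uri: str | None) -> str:
        # If the request omits redirect_uri, falling back to a registered
        # value is only safe when exactly one was registered.
        if redirect_uri is None:
            if len(client.registered_redirect_uris) == 1:
                return client.registered_redirect_uris[0]
            raise OAuthError("redirect_uri is required")
        # Exact string comparison: no prefix matching, no wildcards,
        # no "close enough" normalization.
        if redirect_uri not in client.registered_redirect_uris:
            raise OAuthError("redirect_uri is not registered for this client")
        return redirect_uri

Which is the point: leaving this out isn't a subtle bug you stumble into, it's an entire branch of the flow that simply isn't there.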
I mean, you can't review or cross-reference something that isn't there... So, interpreting in good faith: technically, maybe they just forgot to also check for completeness? /s
DevSecOps Engineer
United States Army Special Operations Command · Full-time
Jun 2022 - Jul 2025 · 3 yrs 2 mos
Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.
Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.
This person was in communications for the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And there appears to be a very unusual connection to Delta Force.
Considering how many times I've heard "don't let perfection be the enemy of good enough" when the code I have is not only incomplete but doesn't even do most of the things asked (yet), I'd wager quite a lot.
I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will release it on their blog, yikes.
> Europe is the only continent in the world to have a large public network of supercomputers that are managed by the EuroHPC Joint Undertaking (EuroHPC JU).
Who would have thought that Europe is the only continent to have a network of supercomputers managed by Europe⸮
I can think of multiple ways to pass the message to Electron developers:
- Open a GitHub issue explaining those private APIs shouldn't be used.
- Even better, open a PR fixing their use.
- Make those API calls a no-op if they come from an Electron app.
- Fix those API calls so they don't grind the OS to a halt over a seemingly simple visual effect.
- Create a public API allowing the same visual effect on a tested and documented API.
If it was a deliberate decision and not an overlooked bug, choosing to (apparently quite violently) degrade the user experience of every Electron app user, with no updated apps available by launch day, is a rather shitty and user-hostile move, don't you think?
> - Make those API calls a no-op if they come from an Electron app.
Long-term, this is a maintenance nightmare. These hacks can stick around for decades, because there's no backpressure on downstream to actually fix things. It's not about "team velocity", it's about keeping yourself sane.
> - Open a GitHub issue explaining those private APIs shouldn't be used.
> - Even better, open a PR fixing their use.
Apple has a history/culture of secrecy. Whenever they provide public source code, it's a dump thrown over the fence. There is most likely some team inside that actually cares, but they can't "just" open an issue. My guess is that this is their message.
> [...] is a rather shitty and user-hostile move, don't you think?
Yes, I agree, the general direction they've been taking has been increasingly user-hostile for a very long time, to say nothing of the developer story.
But sometimes there's also a perfectly reasonable excuse, from both a "technical" and an "organizational" POV. My take is that this is a skunkworks effort to get Electron to fix their crap. I would do the same.
To be clear, Electron themselves fixed the bug quite quickly; but many Electron apps haven't pushed a version that vendors in the fixed version of the Electron runtime.
(And shit like this is exactly why runtimes like the JVM or the .NET CLR are designed to install separately from any particular software that uses them. Each of their minor [client-facing-ABI compatible] versions can then be independently updated to their latest OS-facing-bugfix version without waiting for the software itself to ship that update.)
Apple is consistent in their warnings not to use private APIs, and especially not to override them with custom implementations, which is what Electron does here.
The _cornerMask override was a hack that should never have existed in the first place, and it's not the only use of private APIs in the Electron codebase.
Apple is very clear about how they want you to make software for their OSes. It's 100% on Electron that they chose to do it this way regardless.
I'd go as far as to say Electron itself is a hack that shouldn't exist, but sadly everyone has decided it's the only way they are going to make desktop software now.
> How else do you get the message across? Do not use the private APIs.
The most effective way would be for Apple to actually seek feedback on requirements and then actually implement public APIs for functionality that people need.
That's confusing "consensus building" with "effective". Killing a private API is pretty effective. And consensus building doesn't always build the best software.
... and in the process we'll degrade performance for millions of users and hurt our brand as a top-class-experience company?
I don't really care who is to blame, but they should have identified this and either warned developers or warned users. Or provided a tool for identifying the guilty apps on your machine, and let users decide how to proceed.
> there's also just no reason to rewrite SQLite in another language. […] But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?
The SQLite developers are actually open to the idea of rewriting SQLite in Rust, so they must see an advantage to it:
> All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include: […] If you are a "rustacean" and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.
I think it's the opposite. They want to at least explore rewriting it in Rust but are afraid of backlash, hence the openness to private discussion. I can imagine they are split internally.
Now that is an interesting picture! I am far from being a UI expert, but I do dabble, and I would not have thought both forms of zero could be used in the same HMI display to lower cognitive load.
It's not a 90% speedup, it's ~50% (still quite impressive).
The author seems to be confused, because the original jq is 1.9x slower than the optimized one.
That depends on how you're representing the speedup.
To travel 10 miles at 60 MPH takes 10 minutes. Make it 100% faster, at 120 MPH, and that time becomes 5 minutes: you travel just as far in 50% of the time, or equivalently 100% faster. The 90% speedup corresponds to the time dropping to nearly half (a ~90% projected speedup, or about a 47.5% time reduction, as mathed out by kazinator: `Projected speedup from both: 4.631/2.431 = 1.905`). Your claim that it's closer to 50% is correct from a total-time-taken perspective; it's just coming at it from the other direction.
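For anyone who wants to poke at it, a quick sketch of the two conventions using the 4.631 s / 2.431 s timings quoted above (the variable names and labels are mine):

    before, after = 4.631, 2.431  # seconds, from the numbers quoted above

    speedup_factor = before / after              # ~1.905x
    percent_faster = (speedup_factor - 1) * 100  # ~90% faster (speedup convention)
    time_saved = (1 - after / before) * 100      # ~47.5% less wall-clock time

    print(f"{speedup_factor:.3f}x: {percent_faster:.1f}% faster, {time_saved:.1f}% less time")

Same measurement, two ways of quoting it.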
Professionalism at its finest!