Hacker News | biohazard2's comments

The developer just "cleaned up the code comments", i.e. they removed all TODOs from the code: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd...

Professionalism at its finest!


LLMs made them twice as efficient: with just one release, they're burning tokens and their reputation.

It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there?


I also use this as a simple heuristic:

https://github.com/nkuntz1934/matrix-workers/commits/main/

There are only two commits. I've never seen a "real" project that looks like this.


To be honest, sometimes on my hobby projects I don't commit anything in the beginning (I know, not a great strategy) and then just dump everything in one large commit.

I’ve also been guilty of plugging away at something, and squashing it all before publishing for the first time, because I look at the log and go “no way I can release this, or untangle it into any sort of usefulness”.

I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.

I have a similar process. Internal repo where work gets done. External repo that only gets each release.

The repository is less than one week old though; having only the initial commit wouldn't shock me right away.

That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production.

But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.

It was/is quite common for corporate projects that become open-source to be born as part of an internal repository/monorepo, and when the decision is made to make them open-source, the initial open source commit is just a dump of the files in a snapshotted public-ready state, rather than tracking the internal-repo history (which, even with tooling to rebase partial history, would be immensely harder to audit that internal information wasn't improperly released).

So I wouldn't use the single-commit as a signal indicating AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :)


I usually work in branches in a private repo, squash and merge features / fixes in the private repo, and only merge the clean, verified, extensively tested merges back to public.

You don't need to see every single commit and the exact chronology of my work, snapshots is enough :)
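For what it's worth, that flow is easy to sketch with plain git. Here's a throwaway demo (branch names, file contents, and the release message are made up for illustration); in a real setup you'd finish with something like `git push public public-release:main`:

```shell
# Throwaway demo of the private-work / public-snapshot flow: messy
# work happens on "main", and "public-release" only ever receives
# one squashed commit per release.
set -e
cd "$(mktemp -d)"
git init -q -b main .
git config user.email "dev@example.com" && git config user.name "dev"
echo base > app; git add app; git commit -qm "base"
git branch public-release

# Messy day-to-day work on main...
echo wip1 > app; git commit -qam "wip: experiment"
echo wip2 > app; git commit -qam "fix the experiment"

# ...published as a single squashed release commit.
git checkout -q public-release
git merge --squash -q main
git commit -qm "Release v1.0.0"
git log --oneline public-release
```
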


I might just make dummy commits ("asdadasdassadas") in the prototyping phase and then just squash everything to an "Initial commit" afterwards.
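That squash-to-"Initial commit" step is only two git commands. A throwaway demo (repo contents invented for illustration; this rewrites history, so it only makes sense before the repo is shared):

```shell
# Demo: three messy WIP commits collapsed into a single "Initial commit".
set -e
cd "$(mktemp -d)"
git init -q -b main .
git config user.email "dev@example.com" && git config user.name "dev"
echo 1 > file; git add file; git commit -qm "asdadasdassadas"
echo 2 > file; git commit -qam "wip"
echo 3 > file; git commit -qam "more wip"

# The actual squash: rev-list --max-parents=0 finds the root commit,
# --soft moves HEAD there while keeping the final tree staged, and
# --amend rewrites that root commit in place.
git reset --soft "$(git rev-list --max-parents=0 HEAD)"
git commit -q --amend -m "Initial commit"

git log --oneline
```
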

Oh wow I'm at a loss for words.

To the author: see my comment at https://news.ycombinator.com/item?id=46782174, and please also clean up that misaligned ASCII diagram at the top of the README; it's a dead tell.


Yeah deleting the TODOs like that is honestly a worse look.

Incoming force push to rewrite the history. Git doesn't lie!

I wouldn't put it past them...

I wouldn't put it in past tense...

Reminds me of Cloudflare's OAuth library for Workers.

>Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security

>To emphasize, this is not "vibe coded".

>Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.

...Some time later...

https://github.com/advisories/GHSA-4pc9-x2fx-p7vj


What is the learning here? There were humans involved in every step.

Things built with security in mind are not invulnerable, human written or otherwise.


Taking a best-faith approach here, I think it's indicative of a broader issue, which is that code reviewers can easily get "tunnel vision" where the focus shifts to reviewing each line of code, rather than necessarily cross-referencing against both small details and highly-salient "gotchas" of the specification/story/RFC, and ensuring that those details are not missing from the code.

This applies whether the code is written is by a human or AI, and also whether the code is reviewed by a human or AI.

Is a Github Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR that's being reviewed? Or read relevant RFCs? (And does it even have permission to do all this?)

And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?

This all leads to a conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.


This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.

If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim can it start to mean something.


The problem with "AI" is that, by the very way it was trained, it produces plausible-looking code,

so the "reviewing" process will be looking for the needles in the haystack

when you have no understanding or mental model of how it works, because there isn't one.

It's a recipe for disaster for anything other than trivial projects.


The learning is "they lied". After all, apart from marketing materials making a claim, where is the evidence?

Wait, we think they’re lying because an advisory was eventually found? We think that should be impossible with people involved?

Reading the necessary RFC is table stakes. Instead we got this:

>"NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

>"haha gpus go brrr"

(Those lines remain in the readme, even now: https://github.com/cloudflare/workers-oauth-provider?tab=rea...)


To me it's likely, given the extremely rudimentary nature of that issue.

If you're asking in good faith,

> Every line was thoroughly reviewed and cross-referenced with relevant RFCs

The issue in the CVE comes from a direct contradiction of the RFC. The RFC says you MUST check redirect URIs (and, as anyone who's ever worked with OAuth knows, the functionality around redirect URIs is a staple of how OAuth works in the first place; this isn't some obscure edge case). They didn't make a mistake, they simply did not implement this part of the spec.

When they said every line was "thoroughly reviewed" and "cross referenced", yes, they lied.
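For context, the check in question is famously small. A hypothetical sketch of the RFC 6749 exact-match rule (the client registry, names, and URIs here are invented for illustration; this is not Cloudflare's code):

```python
# Hypothetical sketch of the RFC 6749 requirement: the authorization
# server MUST compare the request's redirect_uri against the client's
# pre-registered URIs, using a simple exact match (no prefix or
# substring matching, which would open the door to open redirects).
REGISTERED_REDIRECT_URIS = {
    "client-123": {"https://app.example.com/callback"},  # invented example data
}

def redirect_uri_is_valid(client_id: str, redirect_uri: str) -> bool:
    """Exact-match check against the client's registered redirect URIs."""
    return redirect_uri in REGISTERED_REDIRECT_URIS.get(client_id, set())
```
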


I mean, you can't review or cross reference something that isn't there... So interpreting in good faith, technically, maybe they just forgot to also check for completeness? /s


https://www.linkedin.com/in/nick-kuntz-61551869/

DevSecOps Engineer United States Army Special Operations Command · Full-time

Jun 2022 - Jul 2025 · 3 yrs 2 mos

Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.


Tbf, there is no one with a ‘serious DevSecOps background’. It’s an incredibly strong hint that the person is largely a goof.

Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.

This person was in communications of the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And it looks like a very unusual connection to Delta Force.


Considering how many times I've heard "don't let perfection be the enemy of good enough" when the code I have is not only incomplete but doesn't even do most of the things asked (yet), I'd wager quite a lot

I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will release it on their blog, yikes.

Covering it up for sure. We all make mistakes. We all make idiots out of ourselves. But you have to take ownership and own up to move on.

Covering it up changes it from being dumb to being deceptive


Wow this is definitely not a software engineer. Hmm I wonder if Git stores history...

They actually rewrote the history later, but GitHub shows force-push history too: https://github.com/nkuntz1934/matrix-workers/activity?activi...

No more vulnerabilities then I guess!

They should have at least rebased it and removed it from the git history.

Hilarious. Judging by the username, it's the same person who wrote the slop blog post, too.


Fortunately, they still provide a way to restore the previous UI through a plugin – Classic UI: https://plugins.jetbrains.com/plugin/24468-classic-ui


>Europe is the only continent in the world to have a large public network of supercomputers that are managed by the EuroHPC Joint Undertaking (EuroHPC JU).

Who would have thought that Europe is the only continent to have a network of supercomputers managed by Europe⸮


Can we blame the Apple employees who apparently never tested their new OS release with any Electron-based application?


How else do you get the message across? Do not use the private APIs.

Electron is most likely using a whole ton more. Apple is sending a message. "Fix your crap or expect more."


I can think of multiple ways to pass the message to Electron developers:

- Open a GitHub issue explaining those private APIs shouldn't be used.

- Even better, open a PR fixing their use.

- Make those API calls a no-op if they come from an Electron app.

- Fix those API calls not to grind the OS to a halt for a seemingly simple visual effect.

- Create a public API allowing the same visual effect on a tested and documented API.

Choosing to (apparently violently) downgrade the user experience of all Electron app users, with no possibility to update on launch day, if it was a deliberate decision and not an overlooked bug, is a rather shitty and user-hostile move, don't you think?


> - Make those API calls a no-op if they come from an Electron app.

Long-term, this is a maintenance nightmare. These hacks can stick around for decades, because there's no backpressure on downstream to actually fix things. It's not about "team velocity", it's about keeping yourself sane.

> - Open a GitHub issue explaining those private APIs shouldn't be used.

> - Even better, open a PR fixing their use.

Apple has a history/culture of secrecy. Whenever they provide public source code, it's a dump thrown over the fence. There is most likely some team inside that actually cares, but they can't "just" open an issue. My guess is that this is their message.

> [...] is a rather shitty and user-hostile move, don't you think?

Yes, I agree, the general direction they've been taking has been increasingly user-hostile for a very long time; let alone the developer story.

But sometimes there's also a perfectly reasonable excuse, from both "technical" and "organizational" POV. That's just my take, a skunkworks effort to get Electron to fix their crap. I would do the same.


The beta has been accessible to the public including the electron devs for 2+ months.


To be clear, Electron themselves fixed the bug quite quickly; but many Electron apps haven't pushed a version that vendors in the fixed version of the Electron runtime.

(And shit like this is exactly why runtimes like the JVM or the .NET CLR are designed to install separately from any particular software that uses them. Each of their minor [client-facing-ABI compatible] versions can then be independently updated to their latest OS-facing-bugfix version without waiting for the software itself to ship that update.)


How nice of Apple to take a huge UX/PR/User Satisfaction hit just to send a message.


Apple is consistent in their warnings to not use private APIs, and especially don't override them for custom implementations which is what Electron does here.

The _cornerMask override was a hack that shouldn't ever have existed in the first place, and it's not the only use of private APIs in the electron code base.

Apple is very clear about how they want you to make software for their OSes. It's 100% on electron that they choose to do it this way regardless.

I'd go as far as to say Electron itself is a hack that shouldn't exist, but sadly everyone has decided it's the only way they are going to make desktop software now.


This mindset is not conducive to loving your customers.


But I also blame users for using crappy electron apps ;-)


> How else do you get the message across? Do not use the private APIs.

The most effective way would be for Apple to actually seek feedback on requirements and then actually implement public APIs for functionality that people need.


That's confusing "consensus building" with "effective". Killing a private api is pretty effective. And consensus building doesn't always build the best software.


I think the consensus sought here is narrow enough.


... and in the process we will deteriorate the performance of millions of users and hurt our brand as a top class experience company?

Don't really care who is to blame, but they should have identified this, and either warn developers, or warn users. Or provide a tool for identifying guilty apps in your machine, and let users decide how to proceed.


> [...] they should have identified this, and either warn developers, or warn users.

Like I said, *this* is their warning.


And they did both, so…?


The reason for having a large public beta process would be to get broader testing, which definitely should have found this.


I’m glad they broke it. People that use private APIs in their apps must suffer.


> there's also just no reason to rewrite SQLite in another language. […] But why would we want to throw away the perfectly good C implementation, and why would we expect the C experts who have been carefully maintaining SQLite for a quarter century to be the ones to learn a new language and start over?

The SQLite developers are actually open to the idea of rewriting SQLite in Rust, so they must see an advantage to it:

> All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include: […] If you are a "rustacean" and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.


My theory is they wrote this just to get the ‘rewrite everything in Rust’ crowd off their backs.


I think it’s the opposite. They want to at least explore rewriting in Rust but are afraid of backlash, hence why they’re open to private discussion. I can imagine they are split internally.


I find it a bit too specific, because it won't get rid of the `rewrite everything in (Go|Zig|…)` crowds. But who knows…?


+1, I replaced my aging DS1812+ with a DXP4800 Plus and I've been quite happy with it.


It seems they are using the regular zero or a slashed variant depending on the risk of confusion: https://lii.enac.fr/wp-content/uploads/2021/08/B612-PolarSys...


Now that is an interesting picture! I am far from being a UI expert, but I do dabble, and I would not have thought both forms of zero could be used in the same HMI display to lower cognitive load.

Very interesting! Thanks.


Different contractors, probably.


Wow that looks WAY better in the picture than in the various screenshots (and google fonts) we're all looking at. It looks very clean and legible.


Two articles providing more information about the creation of this font: https://lii.enac.fr/projects/definition-and-validation-of-an... https://www.enac.fr/fr/une-police-realisee-par-les-chercheur...

In particular, a photo of an Airbus display and a video showing parts of the creation process are provided.


Curiously, the photo of the screen shows slashed zeros, while the font sample shows non-slashed zeros.


I noticed the same thing. It's the first thing I check when someone describes a font as "legible": I want to see O0olI|i displayed.


The photo actually shows some slashed and some non-slashed zeros too. Look at the PSI numbers on the left versus the time numbers in the center right.

That doesn't seem great from a UX standpoint.


Codepoint E007 is a slashed zero... it would be interesting to know why this is not the default.


Strangely, it's not included in the regular font face... only in the italic, bold, bold italic and monospace variants.

0123456789


It's not a 90% speedup, it's ~50% (still quite impressive). The author seems to be confused, because the original jq is 1.9x slower than the optimized one.


That depends on how you're representing the speedup.

To travel 10 miles at 60 MPH takes 10 minutes. Make it 100% faster, at 120 MPH, and that time becomes 5 minutes: you travel just as far in 50% of the time, or equivalently 100% faster. The 90% speedup matches the reduction of the time taken to nearly half (a 90% projected speedup, or about a 47% time reduction, as mathed out by kazinator: `Projected speedup from both: 4.631/2.431 = 1.905`). Your claim that it's closer to 50% is correct from a total-time perspective, just coming at it from the other direction.
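The two framings, using kazinator's numbers from the thread:

```python
# Same measurement, two conventions (times from kazinator's comment).
orig_s, opt_s = 4.631, 2.431

ratio = orig_s / opt_s                        # ~1.90x throughput
speedup_pct = (ratio - 1) * 100               # ~90% "faster" (rate framing)
time_saved_pct = (1 - opt_s / orig_s) * 100   # ~48% less wall time (time framing)
```
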

